Test Report: Docker_Linux_containerd_arm64 22049

b350bc6d66813cad84bbff620e1b65ef38f64c38:2025-12-06:42657

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 501.99
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.74
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.32
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.37
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.4
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.97
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.19
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.75
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.06
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.5
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.7
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.43
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.14
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 88.17
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.29
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.27
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.35
358 TestKubernetesUpgrade 807.49
420 TestStartStop/group/no-preload/serial/FirstStart 510.72
437 TestStartStop/group/newest-cni/serial/FirstStart 503.27
438 TestStartStop/group/no-preload/serial/DeployApp 3.09
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 109.8
442 TestStartStop/group/no-preload/serial/SecondStart 370.66
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 107.99
447 TestStartStop/group/newest-cni/serial/SecondStart 373.54
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.68
452 TestStartStop/group/newest-cni/serial/Pause 9.93
473 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 263.36
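
To reproduce one of these failures outside CI, a sketch (assuming a minikube source checkout with the standard test/integration layout; the exact flags and timeout are assumptions and may differ by branch):

	# Run a single failing test against the same driver/runtime combination as this job.
	# -run takes a Go test regexp; narrow it to one test from the table above.
	go test ./test/integration -v -timeout 90m \
	  -run 'TestFunctionalNewestKubernetes' \
	  --minikube-start-args='--driver=docker --container-runtime=containerd'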
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 08:39:19.930450    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:41:36.065753    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:03.776937    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.331506    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.337857    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.349359    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.370810    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.412272    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.493653    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.655352    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.977062    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:58.618600    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:59.900030    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:02.461492    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:07.583416    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:17.825584    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:38.307872    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:44:19.270927    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:45:41.192412    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:46:36.062179    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.494585893s)

-- stdout --
	* [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Found network options:
	  - HTTP_PROXY=localhost:37029
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:37029 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310933s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001195219s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001195219s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
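The failure mode above (kubelet never serves /healthz within 4m0s on a cgroups v1 host) matches the suggestion minikube prints at the end of stderr. A hedged retry sketch that reuses the test's own invocation and adds only the flag the output itself recommends:

	# Same start invocation as the test, plus the suggested cgroup-driver override.
	out/minikube-linux-arm64 start -p functional-090986 --memory=4096 \
	  --apiserver-port=8441 --wait=all --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd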
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
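Since the node container is still running (see State.Status in the inspect output above) and its entrypoint is /sbin/init, the kubelet troubleshooting commands from the kubeadm output can be run inside it; a sketch, assuming systemd is PID 1 in the kic container:

	# Check kubelet state inside the still-running node container.
	docker exec functional-090986 systemctl status kubelet --no-pager
	docker exec functional-090986 journalctl -xeu kubelet --no-pager | tail -n 50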
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 6 (330.475238ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 08:47:09.893250   48391 status.go:458] kubeconfig endpoint: get endpoint: "functional-090986" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
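The status output above also flags a stale kubectl context; the fix it names can be applied directly (sketch):

	# Repoint the kubeconfig entry at the current cluster, as the warning suggests.
	out/minikube-linux-arm64 -p functional-090986 update-context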
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-181746 --kill=true                                                                                                                        │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ addons         │ functional-181746 addons list                                                                                                                           │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ addons         │ functional-181746 addons list -o json                                                                                                                   │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service hello-node-connect --url                                                                                                      │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ start          │ -p functional-181746 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                   │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ service        │ functional-181746 service list                                                                                                                          │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-181746 --alsologtostderr -v=1                                                                                          │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service list -o json                                                                                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service --namespace=default --https --url hello-node                                                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service hello-node --url --format={{.IP}}                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service hello-node --url                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format short --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format yaml --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ ssh            │ functional-181746 ssh pgrep buildkitd                                                                                                                   │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ image          │ functional-181746 image build -t localhost/my-image:functional-181746 testdata/build --alsologtostderr                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format json --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format table --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls                                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ delete         │ -p functional-181746                                                                                                                                    │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:38:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:38:49.100563   42853 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:38:49.100665   42853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:49.100668   42853 out.go:374] Setting ErrFile to fd 2...
	I1206 08:38:49.100674   42853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:49.101085   42853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:38:49.101920   42853 out.go:368] Setting JSON to false
	I1206 08:38:49.102709   42853 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1280,"bootTime":1765009049,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:38:49.102766   42853 start.go:143] virtualization:  
	I1206 08:38:49.107187   42853 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:38:49.111876   42853 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:38:49.111976   42853 notify.go:221] Checking for updates...
	I1206 08:38:49.119282   42853 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:38:49.122526   42853 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:38:49.125743   42853 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:38:49.128895   42853 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:38:49.131988   42853 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:38:49.135284   42853 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:38:49.154349   42853 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:38:49.154459   42853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:38:49.216433   42853 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 08:38:49.207353366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:38:49.216521   42853 docker.go:319] overlay module found
	I1206 08:38:49.219797   42853 out.go:179] * Using the docker driver based on user configuration
	I1206 08:38:49.222835   42853 start.go:309] selected driver: docker
	I1206 08:38:49.222843   42853 start.go:927] validating driver "docker" against <nil>
	I1206 08:38:49.222865   42853 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:38:49.223700   42853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:38:49.276155   42853 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 08:38:49.266928356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:38:49.276307   42853 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:38:49.276529   42853 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:38:49.279516   42853 out.go:179] * Using Docker driver with root privileges
	I1206 08:38:49.282477   42853 cni.go:84] Creating CNI manager for ""
	I1206 08:38:49.282544   42853 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:38:49.282551   42853 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 08:38:49.282659   42853 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:38:49.285936   42853 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:38:49.289083   42853 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:38:49.292088   42853 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:38:49.295021   42853 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:38:49.295058   42853 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:38:49.295066   42853 cache.go:65] Caching tarball of preloaded images
	I1206 08:38:49.295103   42853 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:38:49.295146   42853 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:38:49.295155   42853 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:38:49.295595   42853 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:38:49.295614   42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json: {Name:mk3148d8af8d6ef4b551b6331eae19668215bd59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:38:49.314580   42853 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:38:49.314590   42853 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:38:49.314616   42853 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:38:49.314637   42853 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:38:49.314745   42853 start.go:364] duration metric: took 94.08µs to acquireMachinesLock for "functional-090986"
	I1206 08:38:49.314776   42853 start.go:93] Provisioning new machine with config: &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 08:38:49.314841   42853 start.go:125] createHost starting for "" (driver="docker")
	I1206 08:38:49.318155   42853 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1206 08:38:49.318405   42853 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:37029 to docker env.
	I1206 08:38:49.318428   42853 start.go:159] libmachine.API.Create for "functional-090986" (driver="docker")
	I1206 08:38:49.318450   42853 client.go:173] LocalClient.Create starting
	I1206 08:38:49.318528   42853 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 08:38:49.318561   42853 main.go:143] libmachine: Decoding PEM data...
	I1206 08:38:49.318574   42853 main.go:143] libmachine: Parsing certificate...
	I1206 08:38:49.318642   42853 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 08:38:49.318662   42853 main.go:143] libmachine: Decoding PEM data...
	I1206 08:38:49.318673   42853 main.go:143] libmachine: Parsing certificate...
	I1206 08:38:49.319017   42853 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 08:38:49.333355   42853 cli_runner.go:211] docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 08:38:49.333421   42853 network_create.go:284] running [docker network inspect functional-090986] to gather additional debugging logs...
	I1206 08:38:49.333443   42853 cli_runner.go:164] Run: docker network inspect functional-090986
	W1206 08:38:49.349114   42853 cli_runner.go:211] docker network inspect functional-090986 returned with exit code 1
	I1206 08:38:49.349134   42853 network_create.go:287] error running [docker network inspect functional-090986]: docker network inspect functional-090986: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-090986 not found
	I1206 08:38:49.349147   42853 network_create.go:289] output of [docker network inspect functional-090986]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-090986 not found
	
	** /stderr **
	I1206 08:38:49.349250   42853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:38:49.370202   42853 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400193c550}
	I1206 08:38:49.370235   42853 network_create.go:124] attempt to create docker network functional-090986 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 08:38:49.370287   42853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-090986 functional-090986
	I1206 08:38:49.436785   42853 network_create.go:108] docker network functional-090986 192.168.49.0/24 created
	I1206 08:38:49.436806   42853 kic.go:121] calculated static IP "192.168.49.2" for the "functional-090986" container
	I1206 08:38:49.436893   42853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 08:38:49.452228   42853 cli_runner.go:164] Run: docker volume create functional-090986 --label name.minikube.sigs.k8s.io=functional-090986 --label created_by.minikube.sigs.k8s.io=true
	I1206 08:38:49.469493   42853 oci.go:103] Successfully created a docker volume functional-090986
	I1206 08:38:49.469571   42853 cli_runner.go:164] Run: docker run --rm --name functional-090986-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-090986 --entrypoint /usr/bin/test -v functional-090986:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 08:38:50.041707   42853 oci.go:107] Successfully prepared a docker volume functional-090986
	I1206 08:38:50.041767   42853 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:38:50.041776   42853 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 08:38:50.041858   42853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-090986:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 08:38:54.065453   42853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-090986:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.02356114s)
	I1206 08:38:54.065476   42853 kic.go:203] duration metric: took 4.023696s to extract preloaded images to volume ...
	W1206 08:38:54.065639   42853 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 08:38:54.065769   42853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 08:38:54.122100   42853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-090986 --name functional-090986 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-090986 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-090986 --network functional-090986 --ip 192.168.49.2 --volume functional-090986:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 08:38:54.427567   42853 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Running}}
	I1206 08:38:54.446501   42853 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:38:54.476096   42853 cli_runner.go:164] Run: docker exec functional-090986 stat /var/lib/dpkg/alternatives/iptables
	I1206 08:38:54.529518   42853 oci.go:144] the created container "functional-090986" has a running status.
	I1206 08:38:54.529538   42853 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa...
	I1206 08:38:55.213719   42853 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 08:38:55.234130   42853 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:38:55.252089   42853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 08:38:55.252100   42853 kic_runner.go:114] Args: [docker exec --privileged functional-090986 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 08:38:55.293105   42853 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:38:55.311431   42853 machine.go:94] provisionDockerMachine start ...
	I1206 08:38:55.311556   42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:38:55.328516   42853 main.go:143] libmachine: Using SSH client type: native
	I1206 08:38:55.328852   42853 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:38:55.328859   42853 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:38:55.329544   42853 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49280->127.0.0.1:32788: read: connection reset by peer
	I1206 08:38:58.482997   42853 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:38:58.483011   42853 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:38:58.483070   42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:38:58.500584   42853 main.go:143] libmachine: Using SSH client type: native
	I1206 08:38:58.500890   42853 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:38:58.500898   42853 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:38:58.665277   42853 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:38:58.665346   42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:38:58.683359   42853 main.go:143] libmachine: Using SSH client type: native
	I1206 08:38:58.683859   42853 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:38:58.683873   42853 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:38:58.835695   42853 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:38:58.835711   42853 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:38:58.835731   42853 ubuntu.go:190] setting up certificates
	I1206 08:38:58.835740   42853 provision.go:84] configureAuth start
	I1206 08:38:58.835805   42853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:38:58.854225   42853 provision.go:143] copyHostCerts
	I1206 08:38:58.854290   42853 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:38:58.854297   42853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:38:58.854375   42853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:38:58.854472   42853 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:38:58.854477   42853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:38:58.854514   42853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:38:58.854608   42853 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:38:58.854612   42853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:38:58.854638   42853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:38:58.854698   42853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
	I1206 08:38:59.087139   42853 provision.go:177] copyRemoteCerts
	I1206 08:38:59.087192   42853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:38:59.087231   42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:38:59.104115   42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:38:59.211112   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:38:59.229566   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:38:59.247321   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 08:38:59.264487   42853 provision.go:87] duration metric: took 428.723643ms to configureAuth
	I1206 08:38:59.264504   42853 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:38:59.264683   42853 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:38:59.264689   42853 machine.go:97] duration metric: took 3.953248131s to provisionDockerMachine
	I1206 08:38:59.264694   42853 client.go:176] duration metric: took 9.946239466s to LocalClient.Create
	I1206 08:38:59.264717   42853 start.go:167] duration metric: took 9.946287982s to libmachine.API.Create "functional-090986"
	I1206 08:38:59.264723   42853 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:38:59.264732   42853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:38:59.264783   42853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:38:59.264830   42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:38:59.281845   42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:38:59.387952   42853 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:38:59.391227   42853 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:38:59.391245   42853 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:38:59.391255   42853 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:38:59.391312   42853 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:38:59.391420   42853 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:38:59.391514   42853 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:38:59.391556   42853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:38:59.399349   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:38:59.417352   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:38:59.435604   42853 start.go:296] duration metric: took 170.868601ms for postStartSetup
	I1206 08:38:59.435971   42853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:38:59.452907   42853 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:38:59.453175   42853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:38:59.453212   42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:38:59.470399   42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:38:59.572241   42853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:38:59.576904   42853 start.go:128] duration metric: took 10.262050596s to createHost
	I1206 08:38:59.576919   42853 start.go:83] releasing machines lock for "functional-090986", held for 10.262167675s
	I1206 08:38:59.577014   42853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:38:59.598202   42853 out.go:179] * Found network options:
	I1206 08:38:59.601134   42853 out.go:179]   - HTTP_PROXY=localhost:37029
	W1206 08:38:59.603909   42853 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1206 08:38:59.606754   42853 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1206 08:38:59.609670   42853 ssh_runner.go:195] Run: cat /version.json
	I1206 08:38:59.609712   42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:38:59.609746   42853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:38:59.609794   42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:38:59.626671   42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:38:59.627953   42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:38:59.731365   42853 ssh_runner.go:195] Run: systemctl --version
	I1206 08:38:59.849305   42853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 08:38:59.853626   42853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:38:59.853689   42853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:38:59.881938   42853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 08:38:59.881952   42853 start.go:496] detecting cgroup driver to use...
	I1206 08:38:59.882006   42853 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 08:38:59.882060   42853 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:38:59.897313   42853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:38:59.910768   42853 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:38:59.910820   42853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:38:59.928532   42853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:38:59.947026   42853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:39:00.216371   42853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:39:00.399200   42853 docker.go:234] disabling docker service ...
	I1206 08:39:00.399267   42853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:39:00.429732   42853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:39:00.446210   42853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:39:00.575173   42853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:39:00.700025   42853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:39:00.713333   42853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:39:00.727750   42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:39:00.736846   42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:39:00.745664   42853 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:39:00.745731   42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:39:00.754556   42853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:39:00.763405   42853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:39:00.772176   42853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:39:00.781049   42853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:39:00.789239   42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:39:00.798100   42853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:39:00.807004   42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
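	The run of sed edits above rewrites /etc/containerd/config.toml in place before containerd is restarted. As a quick sanity check that they landed, one could grep for the affected keys inside the node container (a sketch, assuming the functional-090986 container from this run is still up; the key names and values come from the sed patterns above, while the grep itself is illustrative and not part of the test run):
	
		# Expected after the edits: SystemdCgroup = false, restrict_oom_score_adj = false,
		# sandbox_image = "registry.k8s.io/pause:3.10.1", conf_dir = "/etc/cni/net.d",
		# and enable_unprivileged_ports = true.
		docker exec functional-090986 grep -nE 'SystemdCgroup|restrict_oom_score_adj|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml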
	I1206 08:39:00.816519   42853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:39:00.824172   42853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:39:00.831530   42853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:39:00.956548   42853 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 08:39:01.106967   42853 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:39:01.107037   42853 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:39:01.111627   42853 start.go:564] Will wait 60s for crictl version
	I1206 08:39:01.111696   42853 ssh_runner.go:195] Run: which crictl
	I1206 08:39:01.116064   42853 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:39:01.141058   42853 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 08:39:01.141136   42853 ssh_runner.go:195] Run: containerd --version
	I1206 08:39:01.165442   42853 ssh_runner.go:195] Run: containerd --version
	I1206 08:39:01.191176   42853 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 08:39:01.194282   42853 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:39:01.211170   42853 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:39:01.215490   42853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 08:39:01.225629   42853 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:39:01.225802   42853 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:39:01.225862   42853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:39:01.252238   42853 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:39:01.252255   42853 containerd.go:534] Images already preloaded, skipping extraction
	I1206 08:39:01.252324   42853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:39:01.283469   42853 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:39:01.283482   42853 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:39:01.283490   42853 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:39:01.283603   42853 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:39:01.283679   42853 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:39:01.310710   42853 cni.go:84] Creating CNI manager for ""
	I1206 08:39:01.310721   42853 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:39:01.310739   42853 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:39:01.310761   42853 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:39:01.310873   42853 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 08:39:01.310944   42853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:39:01.319058   42853 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:39:01.319133   42853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:39:01.327237   42853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:39:01.340797   42853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:39:01.354926   42853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 08:39:01.369139   42853 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:39:01.372872   42853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 08:39:01.382951   42853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:39:01.501239   42853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:39:01.517883   42853 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:39:01.517893   42853 certs.go:195] generating shared ca certs ...
	I1206 08:39:01.517909   42853 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:39:01.518072   42853 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:39:01.518123   42853 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:39:01.518130   42853 certs.go:257] generating profile certs ...
	I1206 08:39:01.518188   42853 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:39:01.518199   42853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt with IP's: []
	I1206 08:39:01.891340   42853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt ...
	I1206 08:39:01.891357   42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: {Name:mke1ec76aa123a8f6ce84cf3e07a24e13477f1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:39:01.891561   42853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key ...
	I1206 08:39:01.891568   42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key: {Name:mka00b3224bd4ccc89785c3a36f0add67caaa2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:39:01.891655   42853 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:39:01.891667   42853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt.e2062ee0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 08:39:02.140827   42853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt.e2062ee0 ...
	I1206 08:39:02.140854   42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt.e2062ee0: {Name:mk5d3e434d2ed04c59d8cd890b414cee687f2c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:39:02.141038   42853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0 ...
	I1206 08:39:02.141045   42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0: {Name:mkdb53fda8d1fb12536578975153ac76b8fcdeba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:39:02.141122   42853 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt.e2062ee0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt
	I1206 08:39:02.141205   42853 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key
	I1206 08:39:02.141257   42853 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:39:02.141268   42853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt with IP's: []
	I1206 08:39:02.450858   42853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt ...
	I1206 08:39:02.450872   42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt: {Name:mk6e54e0a470699c5c89b212ebe3736aaa06cad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:39:02.451070   42853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key ...
	I1206 08:39:02.451077   42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key: {Name:mk8296563eb31ce160c7e5f8e2c09f3b0879cdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:39:02.451283   42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:39:02.451321   42853 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:39:02.451328   42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:39:02.451353   42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:39:02.451400   42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:39:02.451425   42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:39:02.451469   42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:39:02.452045   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:39:02.472515   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:39:02.492103   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:39:02.510365   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:39:02.528770   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:39:02.547618   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:39:02.566394   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:39:02.584731   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:39:02.603528   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:39:02.621625   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:39:02.639932   42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:39:02.657943   42853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:39:02.671605   42853 ssh_runner.go:195] Run: openssl version
	I1206 08:39:02.677933   42853 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:39:02.685517   42853 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:39:02.693182   42853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:39:02.696989   42853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:39:02.697057   42853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:39:02.738905   42853 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:39:02.746676   42853 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
	I1206 08:39:02.754291   42853 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:39:02.761963   42853 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:39:02.769900   42853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:39:02.773765   42853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:39:02.773819   42853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:39:02.815217   42853 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 08:39:02.822845   42853 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
	I1206 08:39:02.830404   42853 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:39:02.837681   42853 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:39:02.845198   42853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:39:02.848890   42853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:39:02.848945   42853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:39:02.889819   42853 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:39:02.897367   42853 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 08:39:02.905530   42853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:39:02.909101   42853 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 08:39:02.909145   42853 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:39:02.909215   42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:39:02.909280   42853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:39:02.942248   42853 cri.go:89] found id: ""
	I1206 08:39:02.942308   42853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:39:02.950218   42853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:39:02.958089   42853 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 08:39:02.958144   42853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:39:02.965936   42853 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 08:39:02.965945   42853 kubeadm.go:158] found existing configuration files:
	
	I1206 08:39:02.966006   42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:39:02.974003   42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 08:39:02.974064   42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 08:39:02.981751   42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:39:02.991281   42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 08:39:02.991355   42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:39:03.002114   42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:39:03.011054   42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 08:39:03.011113   42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:39:03.019132   42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:39:03.027214   42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 08:39:03.027282   42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
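[Annotation] The stale-config pass above is mechanical: for each kubeconfig kubeadm manages, minikube greps for the expected control-plane endpoint and deletes the file when the check fails. A minimal shell sketch of the same behaviour, with the endpoint and file names taken from the Run: lines above (illustrative only, not minikube's actual implementation):

	# Drop any kubeconfig that does not point at the expected endpoint.
	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done

In this run every grep exits with status 2 because the files are absent, so all four rm -f calls are no-ops.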
	I1206 08:39:03.035270   42853 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 08:39:03.142834   42853 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 08:39:03.143246   42853 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 08:39:03.222191   42853 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 08:43:07.123474   42853 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 08:43:07.123498   42853 kubeadm.go:319] 
	I1206 08:43:07.123717   42853 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
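[Annotation] The cgroups warning above names the setting that decides this whole run: from kubelet v1.35 on, cgroup v1 hosts are rejected unless the KubeletConfiguration field failCgroupV1 is explicitly set to false. A hedged sketch of applying it to the config file the [kubelet-start] phase writes (path taken from the log below; kubeadm may rewrite the file on a subsequent init, so treat this as an illustration rather than a durable fix):

	# Allow kubelet >= v1.35 to start on a cgroup v1 host, per the
	# WARNING SystemVerification message above. The next kubeadm init
	# can overwrite this file, so the setting may need re-applying.
	echo "failCgroupV1: false" | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet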
	I1206 08:43:07.124435   42853 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 08:43:07.124487   42853 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 08:43:07.124599   42853 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 08:43:07.124669   42853 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 08:43:07.124710   42853 kubeadm.go:319] OS: Linux
	I1206 08:43:07.124768   42853 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 08:43:07.124824   42853 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 08:43:07.124870   42853 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 08:43:07.124927   42853 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 08:43:07.124980   42853 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 08:43:07.125031   42853 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 08:43:07.125079   42853 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 08:43:07.125129   42853 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 08:43:07.125178   42853 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 08:43:07.125256   42853 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 08:43:07.125360   42853 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 08:43:07.125457   42853 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 08:43:07.125526   42853 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 08:43:07.128298   42853 out.go:252]   - Generating certificates and keys ...
	I1206 08:43:07.128406   42853 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 08:43:07.128471   42853 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 08:43:07.128541   42853 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 08:43:07.128601   42853 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 08:43:07.128661   42853 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 08:43:07.128710   42853 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 08:43:07.128762   42853 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 08:43:07.128940   42853 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 08:43:07.129007   42853 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 08:43:07.129144   42853 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 08:43:07.129213   42853 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 08:43:07.129276   42853 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 08:43:07.129326   42853 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 08:43:07.129381   42853 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 08:43:07.129434   42853 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 08:43:07.129500   42853 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 08:43:07.129554   42853 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 08:43:07.129624   42853 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 08:43:07.129678   42853 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 08:43:07.129769   42853 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 08:43:07.129835   42853 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 08:43:07.132901   42853 out.go:252]   - Booting up control plane ...
	I1206 08:43:07.133011   42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 08:43:07.133118   42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 08:43:07.133188   42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 08:43:07.133315   42853 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 08:43:07.133422   42853 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 08:43:07.133536   42853 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 08:43:07.133628   42853 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 08:43:07.133666   42853 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 08:43:07.133815   42853 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 08:43:07.133928   42853 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 08:43:07.134004   42853 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001310933s
	I1206 08:43:07.134007   42853 kubeadm.go:319] 
	I1206 08:43:07.134063   42853 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 08:43:07.134102   42853 kubeadm.go:319] 	- The kubelet is not running
	I1206 08:43:07.134208   42853 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 08:43:07.134211   42853 kubeadm.go:319] 
	I1206 08:43:07.134316   42853 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 08:43:07.134346   42853 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 08:43:07.134381   42853 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 08:43:07.134437   42853 kubeadm.go:319] 
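[Annotation] The health endpoint kubeadm polls for those 4 minutes is plain HTTP on the node itself, so the probe and the two triage commands suggested above can be run directly (standard curl and systemd usage, nothing minikube-specific):

	# kubeadm's probe: returns "ok" once the kubelet is healthy.
	curl -sS http://127.0.0.1:10248/healthz; echo
	# The troubleshooting steps suggested in the log above.
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet -n 50 --no-pager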
	W1206 08:43:07.134514   42853 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310933s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 08:43:07.134607   42853 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 08:43:07.545231   42853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
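[Annotation] Between attempts minikube wipes kubeadm state and checks the kubelet unit, as the two Run: lines above show. The equivalent manual commands, with the binary path and CRI socket copied from the log (a sketch; minikube drives these over SSH inside the node container):

	# Reset kubeadm state before retrying init.
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	    kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	# Check whether the kubelet unit is currently active.
	sudo systemctl is-active --quiet kubelet && echo active || echo inactive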
	I1206 08:43:07.560225   42853 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 08:43:07.560282   42853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:43:07.568070   42853 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 08:43:07.568079   42853 kubeadm.go:158] found existing configuration files:
	
	I1206 08:43:07.568127   42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:43:07.576165   42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 08:43:07.576222   42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 08:43:07.583697   42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:43:07.591686   42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 08:43:07.591747   42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:43:07.599236   42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:43:07.607046   42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 08:43:07.607104   42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:43:07.614591   42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:43:07.622167   42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 08:43:07.622224   42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 08:43:07.629628   42853 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 08:43:07.667752   42853 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 08:43:07.668031   42853 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 08:43:07.745214   42853 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 08:43:07.745299   42853 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 08:43:07.745343   42853 kubeadm.go:319] OS: Linux
	I1206 08:43:07.745401   42853 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 08:43:07.745460   42853 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 08:43:07.745519   42853 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 08:43:07.745578   42853 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 08:43:07.745638   42853 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 08:43:07.745697   42853 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 08:43:07.745754   42853 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 08:43:07.745801   42853 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 08:43:07.745860   42853 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 08:43:07.821936   42853 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 08:43:07.822033   42853 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 08:43:07.822118   42853 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 08:43:07.828557   42853 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 08:43:07.834085   42853 out.go:252]   - Generating certificates and keys ...
	I1206 08:43:07.834169   42853 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 08:43:07.834233   42853 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 08:43:07.834308   42853 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 08:43:07.834367   42853 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 08:43:07.834435   42853 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 08:43:07.834488   42853 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 08:43:07.834554   42853 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 08:43:07.834614   42853 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 08:43:07.834687   42853 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 08:43:07.834767   42853 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 08:43:07.834870   42853 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 08:43:07.834937   42853 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 08:43:08.278422   42853 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 08:43:08.539294   42853 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 08:43:08.582158   42853 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 08:43:08.680522   42853 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 08:43:08.962582   42853 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 08:43:08.963384   42853 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 08:43:08.966092   42853 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 08:43:08.969419   42853 out.go:252]   - Booting up control plane ...
	I1206 08:43:08.969548   42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 08:43:08.969646   42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 08:43:08.969728   42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 08:43:08.991741   42853 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 08:43:08.992711   42853 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 08:43:09.001100   42853 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 08:43:09.001418   42853 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 08:43:09.001702   42853 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 08:43:09.132563   42853 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 08:43:09.132703   42853 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 08:47:09.133344   42853 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001195219s
	I1206 08:47:09.133363   42853 kubeadm.go:319] 
	I1206 08:47:09.133422   42853 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 08:47:09.133455   42853 kubeadm.go:319] 	- The kubelet is not running
	I1206 08:47:09.133559   42853 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 08:47:09.133563   42853 kubeadm.go:319] 
	I1206 08:47:09.134080   42853 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 08:47:09.134142   42853 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 08:47:09.134340   42853 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 08:47:09.134344   42853 kubeadm.go:319] 
	I1206 08:47:09.140050   42853 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 08:47:09.140467   42853 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 08:47:09.140573   42853 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 08:47:09.140836   42853 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 08:47:09.140841   42853 kubeadm.go:319] 
	I1206 08:47:09.140910   42853 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 08:47:09.140963   42853 kubeadm.go:403] duration metric: took 8m6.231822508s to StartCluster
	I1206 08:47:09.141009   42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:47:09.141070   42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:47:09.165494   42853 cri.go:89] found id: ""
	I1206 08:47:09.165513   42853 logs.go:282] 0 containers: []
	W1206 08:47:09.165520   42853 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:47:09.165525   42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:47:09.165591   42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:47:09.189702   42853 cri.go:89] found id: ""
	I1206 08:47:09.189715   42853 logs.go:282] 0 containers: []
	W1206 08:47:09.189722   42853 logs.go:284] No container was found matching "etcd"
	I1206 08:47:09.189727   42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:47:09.189789   42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:47:09.214580   42853 cri.go:89] found id: ""
	I1206 08:47:09.214593   42853 logs.go:282] 0 containers: []
	W1206 08:47:09.214601   42853 logs.go:284] No container was found matching "coredns"
	I1206 08:47:09.214606   42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:47:09.214665   42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:47:09.241367   42853 cri.go:89] found id: ""
	I1206 08:47:09.241392   42853 logs.go:282] 0 containers: []
	W1206 08:47:09.241400   42853 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:47:09.241406   42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:47:09.241510   42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:47:09.266821   42853 cri.go:89] found id: ""
	I1206 08:47:09.266834   42853 logs.go:282] 0 containers: []
	W1206 08:47:09.266841   42853 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:47:09.266846   42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:47:09.266903   42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:47:09.296050   42853 cri.go:89] found id: ""
	I1206 08:47:09.296064   42853 logs.go:282] 0 containers: []
	W1206 08:47:09.296071   42853 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:47:09.296077   42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:47:09.296136   42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:47:09.321390   42853 cri.go:89] found id: ""
	I1206 08:47:09.321403   42853 logs.go:282] 0 containers: []
	W1206 08:47:09.321410   42853 logs.go:284] No container was found matching "kindnet"
	I1206 08:47:09.321429   42853 logs.go:123] Gathering logs for kubelet ...
	I1206 08:47:09.321440   42853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:47:09.377851   42853 logs.go:123] Gathering logs for dmesg ...
	I1206 08:47:09.377868   42853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:47:09.388816   42853 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:47:09.388830   42853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:47:09.453833   42853 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:47:09.445536    4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:47:09.446397    4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:47:09.448143    4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:47:09.448438    4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:47:09.449920    4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:47:09.453844   42853 logs.go:123] Gathering logs for containerd ...
	I1206 08:47:09.453854   42853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:47:09.495854   42853 logs.go:123] Gathering logs for container status ...
	I1206 08:47:09.495873   42853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
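[Annotation] With the apiserver down, the log-gathering pass degrades as expected: journalctl, dmesg, and crictl still work locally, while kubectl fails against the dead endpoint (as seen above). The same bundle can be collected by hand, with commands copied from the Run: lines:

	# Diagnostics minikube gathers when cluster start fails.
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u containerd -n 400 --no-pager
	sudo crictl ps -a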
	W1206 08:47:09.530262   42853 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001195219s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 08:47:09.530305   42853 out.go:285] * 
	W1206 08:47:09.530365   42853 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001195219s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 08:47:09.530374   42853 out.go:285] * 
	W1206 08:47:09.532520   42853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:47:09.537665   42853 out.go:203] 
	W1206 08:47:09.539878   42853 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001195219s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 08:47:09.539930   42853 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 08:47:09.539951   42853 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
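[Annotation] For completeness, minikube's suggested remediation would look like the invocation below (profile, driver, and runtime taken from this test). Note the suggestion predates kubelet v1.35's cgroup v1 removal, so on this host the failCgroupV1 option discussed earlier is the more plausible fix; this flag is unverified against the failure:

	# Remediation exactly as suggested by minikube above (unverified here).
	minikube start -p functional-090986 --driver=docker \
	    --container-runtime=containerd \
	    --extra-config=kubelet.cgroup-driver=systemd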
	I1206 08:47:09.543310   42853 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.041993520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042066947Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042175402Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042246475Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042319812Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042380497Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042438680Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042525219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042598901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042691840Z" level=info msg="Connect containerd service"
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.043054055Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.043778394Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.060430607Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.060537618Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.061094827Z" level=info msg="Start subscribing containerd event"
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.061161748Z" level=info msg="Start recovering state"
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104257792Z" level=info msg="Start event monitor"
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104326009Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104335543Z" level=info msg="Start streaming server"
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104344922Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104353792Z" level=info msg="runtime interface starting up..."
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104360930Z" level=info msg="starting plugins..."
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104373861Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 08:39:01 functional-090986 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.106555067Z" level=info msg="containerd successfully booted in 0.091825s"
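[Annotation] The only error containerd reports during boot is the missing CNI config, which is expected at this stage: the kindnet CNI this job greps for (or another CNI plugin) is deployed only after kubeadm succeeds. Confirming is a one-liner, with the directory taken from the error message above:

	# Empty before a CNI addon is installed; containerd retries later.
	ls -la /etc/cni/net.d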
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:47:10.554438    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:47:10.554872    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:47:10.556349    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:47:10.556677    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:47:10.558095    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> kernel <==
	 08:47:10 up 29 min,  0 user,  load average: 0.30, 0.55, 0.74
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 08:47:07 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:47:07 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 06 08:47:07 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:47:07 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:47:08 functional-090986 kubelet[4739]: E1206 08:47:08.025431    4739 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 06 08:47:08 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:47:08 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:47:08 functional-090986 kubelet[4744]: E1206 08:47:08.767550    4744 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:47:09 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 08:47:09 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:47:09 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:47:09 functional-090986 kubelet[4819]: E1206 08:47:09.533262    4819 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:47:09 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:47:09 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:47:10 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 08:47:10 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:47:10 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:47:10 functional-090986 kubelet[4858]: E1206 08:47:10.279803    4858 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:47:10 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:47:10 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
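
	Note: restarts 318 through 321 above all die in kubelet configuration validation with the same message, so the failure is host-level: the v1.35.0-beta.0 kubelet refuses to run on cgroup v1, and this Ubuntu 20.04 host (kernel 5.15, per the dmesg and kernel sections) still boots with cgroup v1. A hedged sketch for confirming that on the node; the failCgroupV1 field name is the upstream KubeletConfiguration option and is an assumption here, not something this log shows:

		stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" on cgroup v2, "tmpfs" on cgroup v1
		# assumption: kubelet gates cgroup v1 behind the failCgroupV1 config field
		grep -i failCgroupV1 /var/lib/kubelet/config.yaml || echo "field not set (defaulted)"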
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 6 (356.980828ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 08:47:11.030285   48605 status.go:458] kubeconfig endpoint: get endpoint: "functional-090986" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.99s)
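
Note: exit status 6 together with the "kubeconfig endpoint" error above means the profile's entry is missing from the kubeconfig, so kubectl points at a stale context; the warning's own suggestion is the relevant fix. A hedged usage sketch with the profile name from this run:

	minikube update-context -p functional-090986
	kubectl config current-context   # should name the functional-090986 context again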

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 08:47:11.046349    4292 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-090986 --alsologtostderr -v=8
E1206 08:47:57.330911    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:48:25.034555    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:51:36.061690    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:52:57.331425    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:52:59.138342    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-090986 --alsologtostderr -v=8: exit status 80 (6m5.702764948s)

-- stdout --
	* [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1206 08:47:11.094911   48683 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:47:11.095050   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095060   48683 out.go:374] Setting ErrFile to fd 2...
	I1206 08:47:11.095065   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095329   48683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:47:11.095763   48683 out.go:368] Setting JSON to false
	I1206 08:47:11.096588   48683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1782,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:47:11.096668   48683 start.go:143] virtualization:  
	I1206 08:47:11.100026   48683 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:47:11.103775   48683 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:47:11.103977   48683 notify.go:221] Checking for updates...
	I1206 08:47:11.109719   48683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:47:11.112668   48683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:11.115549   48683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:47:11.118516   48683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:47:11.121495   48683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:47:11.124961   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:11.125074   48683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:47:11.149854   48683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:47:11.149988   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.212959   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.203697623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.213084   48683 docker.go:319] overlay module found
	I1206 08:47:11.216243   48683 out.go:179] * Using the docker driver based on existing profile
	I1206 08:47:11.219285   48683 start.go:309] selected driver: docker
	I1206 08:47:11.219311   48683 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.219451   48683 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:47:11.219560   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.284944   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.27604915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.285369   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:11.285438   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:47:11.285486   48683 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.289257   48683 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:47:11.292082   48683 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:47:11.295206   48683 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:47:11.298095   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:11.298152   48683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:47:11.298166   48683 cache.go:65] Caching tarball of preloaded images
	I1206 08:47:11.298170   48683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:47:11.298253   48683 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:47:11.298264   48683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:47:11.298374   48683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:47:11.317301   48683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:47:11.317323   48683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:47:11.317345   48683 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:47:11.317377   48683 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:47:11.317445   48683 start.go:364] duration metric: took 50.847µs to acquireMachinesLock for "functional-090986"
	I1206 08:47:11.317466   48683 start.go:96] Skipping create...Using existing machine configuration
	I1206 08:47:11.317471   48683 fix.go:54] fixHost starting: 
	I1206 08:47:11.317772   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:11.334567   48683 fix.go:112] recreateIfNeeded on functional-090986: state=Running err=<nil>
	W1206 08:47:11.334595   48683 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 08:47:11.337684   48683 out.go:252] * Updating the running docker "functional-090986" container ...
	I1206 08:47:11.337717   48683 machine.go:94] provisionDockerMachine start ...
	I1206 08:47:11.337795   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.354534   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.354869   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.354883   48683 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:47:11.507058   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.507088   48683 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:47:11.507161   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.525196   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.525520   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.525537   48683 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:47:11.684471   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.684556   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.702187   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.702515   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.702540   48683 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:47:11.859622   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:47:11.859650   48683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:47:11.859671   48683 ubuntu.go:190] setting up certificates
	I1206 08:47:11.859680   48683 provision.go:84] configureAuth start
	I1206 08:47:11.859747   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:11.877706   48683 provision.go:143] copyHostCerts
	I1206 08:47:11.877750   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877787   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:47:11.877800   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877873   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:47:11.877976   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.877997   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:47:11.878007   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.878035   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:47:11.878088   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878108   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:47:11.878114   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878140   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:47:11.878192   48683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
	I1206 08:47:12.018564   48683 provision.go:177] copyRemoteCerts
	I1206 08:47:12.018632   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:47:12.018672   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.036577   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.143156   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 08:47:12.143226   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:47:12.160243   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 08:47:12.160303   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 08:47:12.177568   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 08:47:12.177628   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:47:12.194504   48683 provision.go:87] duration metric: took 334.802128ms to configureAuth
	I1206 08:47:12.194543   48683 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:47:12.194717   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:12.194725   48683 machine.go:97] duration metric: took 857.000255ms to provisionDockerMachine
	I1206 08:47:12.194732   48683 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:47:12.194743   48683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:47:12.194796   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:47:12.194842   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.212073   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.315270   48683 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:47:12.318678   48683 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 08:47:12.318701   48683 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 08:47:12.318706   48683 command_runner.go:130] > VERSION_ID="12"
	I1206 08:47:12.318711   48683 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 08:47:12.318717   48683 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 08:47:12.318720   48683 command_runner.go:130] > ID=debian
	I1206 08:47:12.318724   48683 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 08:47:12.318730   48683 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 08:47:12.318735   48683 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 08:47:12.318975   48683 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:47:12.319002   48683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:47:12.319013   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:47:12.319072   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:47:12.319161   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:47:12.319172   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /etc/ssl/certs/42922.pem
	I1206 08:47:12.319246   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:47:12.319253   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> /etc/test/nested/copy/4292/hosts
	I1206 08:47:12.319298   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:47:12.327031   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:12.344679   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:47:12.363077   48683 start.go:296] duration metric: took 168.329595ms for postStartSetup
	I1206 08:47:12.363152   48683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:47:12.363210   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.380353   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.487060   48683 command_runner.go:130] > 11%
	I1206 08:47:12.487699   48683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:47:12.493338   48683 command_runner.go:130] > 174G
	I1206 08:47:12.494716   48683 fix.go:56] duration metric: took 1.177238165s for fixHost
	I1206 08:47:12.494741   48683 start.go:83] releasing machines lock for "functional-090986", held for 1.177286419s
	I1206 08:47:12.494813   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:12.512960   48683 ssh_runner.go:195] Run: cat /version.json
	I1206 08:47:12.513022   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.513272   48683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:47:12.513331   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.541090   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.554766   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.647127   48683 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 08:47:12.647264   48683 ssh_runner.go:195] Run: systemctl --version
	I1206 08:47:12.750867   48683 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 08:47:12.751021   48683 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 08:47:12.751059   48683 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 08:47:12.751151   48683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 08:47:12.755609   48683 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 08:47:12.756103   48683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:47:12.756176   48683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:47:12.764393   48683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 08:47:12.764420   48683 start.go:496] detecting cgroup driver to use...
	I1206 08:47:12.764452   48683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 08:47:12.764507   48683 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:47:12.779951   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:47:12.793243   48683 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:47:12.793324   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:47:12.809005   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:47:12.823043   48683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:47:12.939696   48683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:47:13.060632   48683 docker.go:234] disabling docker service ...
	I1206 08:47:13.060721   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:47:13.078332   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:47:13.093719   48683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:47:13.229319   48683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:47:13.368814   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:47:13.381432   48683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:47:13.395011   48683 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1206 08:47:13.396419   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:47:13.405770   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:47:13.415310   48683 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:47:13.415505   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:47:13.424963   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.433399   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:47:13.442072   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.450816   48683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:47:13.458824   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:47:13.467776   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:47:13.477145   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 08:47:13.486457   48683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:47:13.493910   48683 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 08:47:13.494986   48683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:47:13.503356   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:13.622996   48683 ssh_runner.go:195] Run: sudo systemctl restart containerd
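	Note: the sed commands above edit /etc/containerd/config.toml in place before this restart, so the effective settings can be verified directly. The grep below is a sketch that only uses key names taken from those commands:

		grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
		# expected, per the edits above:
		#   sandbox_image = "registry.k8s.io/pause:3.10.1"
		#   restrict_oom_score_adj = false
		#   SystemdCgroup = false
		#   conf_dir = "/etc/cni/net.d"
		#   enable_unprivileged_ports = true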
	I1206 08:47:13.753042   48683 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:47:13.753133   48683 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:47:13.757647   48683 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 08:47:13.757672   48683 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 08:47:13.757681   48683 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1206 08:47:13.757689   48683 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:13.757724   48683 command_runner.go:130] > Access: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757736   48683 command_runner.go:130] > Modify: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757742   48683 command_runner.go:130] > Change: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757746   48683 command_runner.go:130] >  Birth: -
	I1206 08:47:13.757803   48683 start.go:564] Will wait 60s for crictl version
	I1206 08:47:13.757883   48683 ssh_runner.go:195] Run: which crictl
	I1206 08:47:13.761846   48683 command_runner.go:130] > /usr/local/bin/crictl
	I1206 08:47:13.761974   48683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:47:13.786269   48683 command_runner.go:130] > Version:  0.1.0
	I1206 08:47:13.786289   48683 command_runner.go:130] > RuntimeName:  containerd
	I1206 08:47:13.786295   48683 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 08:47:13.786302   48683 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 08:47:13.788604   48683 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 08:47:13.788708   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.809864   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 08:47:13.811926   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.831700   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 08:47:13.839817   48683 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 08:47:13.842721   48683 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:47:13.858999   48683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:47:13.862710   48683 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 08:47:13.862939   48683 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:47:13.863057   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:13.863132   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.889556   48683 command_runner.go:130] > {
	I1206 08:47:13.889580   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.889586   48683 command_runner.go:130] >     {
	I1206 08:47:13.889601   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.889607   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889612   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.889616   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889619   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889628   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.889635   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889640   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.889652   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889657   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889661   48683 command_runner.go:130] >     },
	I1206 08:47:13.889664   48683 command_runner.go:130] >     {
	I1206 08:47:13.889672   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.889676   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889681   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.889687   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889691   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889707   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.889710   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889715   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.889725   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889729   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889733   48683 command_runner.go:130] >     },
	I1206 08:47:13.889736   48683 command_runner.go:130] >     {
	I1206 08:47:13.889743   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.889752   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889767   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.889770   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889777   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889785   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.889792   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889796   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.889800   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.889808   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889815   48683 command_runner.go:130] >     },
	I1206 08:47:13.889818   48683 command_runner.go:130] >     {
	I1206 08:47:13.889825   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.889829   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889837   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.889841   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889844   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889852   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.889863   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889867   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.889871   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889875   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889885   48683 command_runner.go:130] >       },
	I1206 08:47:13.889889   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889892   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889896   48683 command_runner.go:130] >     },
	I1206 08:47:13.889899   48683 command_runner.go:130] >     {
	I1206 08:47:13.889906   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.889912   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889918   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.889920   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889925   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889933   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.889937   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889945   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.889949   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889960   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889964   48683 command_runner.go:130] >       },
	I1206 08:47:13.889970   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889975   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889987   48683 command_runner.go:130] >     },
	I1206 08:47:13.890022   48683 command_runner.go:130] >     {
	I1206 08:47:13.890033   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.890037   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890043   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.890049   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890054   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890064   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.890070   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890075   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.890078   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890082   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890087   48683 command_runner.go:130] >       },
	I1206 08:47:13.890092   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890098   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890102   48683 command_runner.go:130] >     },
	I1206 08:47:13.890105   48683 command_runner.go:130] >     {
	I1206 08:47:13.890112   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.890115   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890121   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.890124   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890139   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.890145   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890149   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.890153   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890156   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890159   48683 command_runner.go:130] >     },
	I1206 08:47:13.890170   48683 command_runner.go:130] >     {
	I1206 08:47:13.890177   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.890181   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890187   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.890190   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890206   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.890215   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890223   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.890228   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890231   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890235   48683 command_runner.go:130] >       },
	I1206 08:47:13.890239   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890250   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890254   48683 command_runner.go:130] >     },
	I1206 08:47:13.890257   48683 command_runner.go:130] >     {
	I1206 08:47:13.890264   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.890272   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890277   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.890280   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890284   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890291   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.890294   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890298   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.890305   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890310   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.890315   48683 command_runner.go:130] >       },
	I1206 08:47:13.890319   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890331   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.890335   48683 command_runner.go:130] >     }
	I1206 08:47:13.890337   48683 command_runner.go:130] >   ]
	I1206 08:47:13.890340   48683 command_runner.go:130] > }
	I1206 08:47:13.892630   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.892653   48683 containerd.go:534] Images already preloaded, skipping extraction
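
	For reference, the block above is the JSON printed by "sudo crictl images --output json", split across log lines by minikube's command_runner. A minimal Go sketch of decoding that output and checking a tag against it; the struct fields are inferred from the dump above, and the expected set here is a hypothetical example, not minikube's actual preload manifest:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Fields mirror the crictl JSON shown above; unknown fields are ignored.
    type image struct {
    	RepoTags []string `json:"repoTags"`
    	Size     string   `json:"size"`
    	Pinned   bool     `json:"pinned"`
    }

    type imageList struct {
    	Images []image `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	// Hypothetical expected set; the real check lives in minikube's containerd.go.
    	expected := map[string]bool{"registry.k8s.io/pause:3.10.1": false}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if _, ok := expected[tag]; ok {
    				expected[tag] = true
    			}
    		}
    	}
    	for tag, found := range expected {
    		fmt.Printf("%s preloaded=%v\n", tag, found)
    	}
    }
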
	I1206 08:47:13.892734   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.915064   48683 command_runner.go:130] > {
	I1206 08:47:13.915085   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.915091   48683 command_runner.go:130] >     {
	I1206 08:47:13.915102   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.915109   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915115   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.915119   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915142   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.915149   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915153   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.915157   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915161   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915164   48683 command_runner.go:130] >     },
	I1206 08:47:13.915167   48683 command_runner.go:130] >     {
	I1206 08:47:13.915178   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.915184   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915189   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.915193   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915208   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.915214   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915218   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.915222   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915225   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915228   48683 command_runner.go:130] >     },
	I1206 08:47:13.915231   48683 command_runner.go:130] >     {
	I1206 08:47:13.915238   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.915245   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915251   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.915254   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915262   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915270   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.915275   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915279   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.915286   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.915291   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915295   48683 command_runner.go:130] >     },
	I1206 08:47:13.915298   48683 command_runner.go:130] >     {
	I1206 08:47:13.915305   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.915311   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915320   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.915324   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915328   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915338   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.915341   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915345   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.915349   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915352   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915359   48683 command_runner.go:130] >       },
	I1206 08:47:13.915363   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915410   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915414   48683 command_runner.go:130] >     },
	I1206 08:47:13.915418   48683 command_runner.go:130] >     {
	I1206 08:47:13.915424   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.915428   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915434   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.915437   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915441   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915448   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.915451   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915455   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.915458   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915471   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915474   48683 command_runner.go:130] >       },
	I1206 08:47:13.915478   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915481   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915484   48683 command_runner.go:130] >     },
	I1206 08:47:13.915487   48683 command_runner.go:130] >     {
	I1206 08:47:13.915494   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.915497   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915503   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.915506   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915509   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915523   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.915526   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915530   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.915534   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915540   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915543   48683 command_runner.go:130] >       },
	I1206 08:47:13.915547   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915550   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915553   48683 command_runner.go:130] >     },
	I1206 08:47:13.915556   48683 command_runner.go:130] >     {
	I1206 08:47:13.915563   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.915580   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915585   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.915588   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915592   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915601   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.915608   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915612   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.915616   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915620   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915622   48683 command_runner.go:130] >     },
	I1206 08:47:13.915626   48683 command_runner.go:130] >     {
	I1206 08:47:13.915635   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.915649   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915655   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.915658   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915662   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915670   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.915676   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915680   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.915684   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915687   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915691   48683 command_runner.go:130] >       },
	I1206 08:47:13.915699   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915706   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915710   48683 command_runner.go:130] >     },
	I1206 08:47:13.915713   48683 command_runner.go:130] >     {
	I1206 08:47:13.915720   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.915723   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915728   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.915731   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915735   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915746   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.915752   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915756   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.915760   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915764   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.915777   48683 command_runner.go:130] >       },
	I1206 08:47:13.915781   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915785   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.915790   48683 command_runner.go:130] >     }
	I1206 08:47:13.915793   48683 command_runner.go:130] >   ]
	I1206 08:47:13.915796   48683 command_runner.go:130] > }
	I1206 08:47:13.917976   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.917998   48683 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:47:13.918006   48683 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:47:13.918108   48683 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:47:13.918181   48683 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:47:13.946472   48683 command_runner.go:130] > {
	I1206 08:47:13.946489   48683 command_runner.go:130] >   "cniconfig": {
	I1206 08:47:13.946494   48683 command_runner.go:130] >     "Networks": [
	I1206 08:47:13.946497   48683 command_runner.go:130] >       {
	I1206 08:47:13.946502   48683 command_runner.go:130] >         "Config": {
	I1206 08:47:13.946507   48683 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 08:47:13.946512   48683 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 08:47:13.946516   48683 command_runner.go:130] >           "Plugins": [
	I1206 08:47:13.946520   48683 command_runner.go:130] >             {
	I1206 08:47:13.946524   48683 command_runner.go:130] >               "Network": {
	I1206 08:47:13.946529   48683 command_runner.go:130] >                 "ipam": {},
	I1206 08:47:13.946537   48683 command_runner.go:130] >                 "type": "loopback"
	I1206 08:47:13.946541   48683 command_runner.go:130] >               },
	I1206 08:47:13.946554   48683 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 08:47:13.946558   48683 command_runner.go:130] >             }
	I1206 08:47:13.946561   48683 command_runner.go:130] >           ],
	I1206 08:47:13.946573   48683 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 08:47:13.946581   48683 command_runner.go:130] >         },
	I1206 08:47:13.946586   48683 command_runner.go:130] >         "IFName": "lo"
	I1206 08:47:13.946590   48683 command_runner.go:130] >       }
	I1206 08:47:13.946593   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946597   48683 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 08:47:13.946601   48683 command_runner.go:130] >     "PluginDirs": [
	I1206 08:47:13.946605   48683 command_runner.go:130] >       "/opt/cni/bin"
	I1206 08:47:13.946609   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946613   48683 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 08:47:13.946617   48683 command_runner.go:130] >     "Prefix": "eth"
	I1206 08:47:13.946620   48683 command_runner.go:130] >   },
	I1206 08:47:13.946623   48683 command_runner.go:130] >   "config": {
	I1206 08:47:13.946627   48683 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 08:47:13.946630   48683 command_runner.go:130] >       "/etc/cdi",
	I1206 08:47:13.946636   48683 command_runner.go:130] >       "/var/run/cdi"
	I1206 08:47:13.946640   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946643   48683 command_runner.go:130] >     "cni": {
	I1206 08:47:13.946646   48683 command_runner.go:130] >       "binDir": "",
	I1206 08:47:13.946650   48683 command_runner.go:130] >       "binDirs": [
	I1206 08:47:13.946653   48683 command_runner.go:130] >         "/opt/cni/bin"
	I1206 08:47:13.946656   48683 command_runner.go:130] >       ],
	I1206 08:47:13.946661   48683 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 08:47:13.946665   48683 command_runner.go:130] >       "confTemplate": "",
	I1206 08:47:13.946668   48683 command_runner.go:130] >       "ipPref": "",
	I1206 08:47:13.946672   48683 command_runner.go:130] >       "maxConfNum": 1,
	I1206 08:47:13.946676   48683 command_runner.go:130] >       "setupSerially": false,
	I1206 08:47:13.946680   48683 command_runner.go:130] >       "useInternalLoopback": false
	I1206 08:47:13.946683   48683 command_runner.go:130] >     },
	I1206 08:47:13.946688   48683 command_runner.go:130] >     "containerd": {
	I1206 08:47:13.946696   48683 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 08:47:13.946701   48683 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 08:47:13.946706   48683 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 08:47:13.946710   48683 command_runner.go:130] >       "runtimes": {
	I1206 08:47:13.946713   48683 command_runner.go:130] >         "runc": {
	I1206 08:47:13.946718   48683 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 08:47:13.946722   48683 command_runner.go:130] >           "PodAnnotations": null,
	I1206 08:47:13.946728   48683 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 08:47:13.946733   48683 command_runner.go:130] >           "cgroupWritable": false,
	I1206 08:47:13.946738   48683 command_runner.go:130] >           "cniConfDir": "",
	I1206 08:47:13.946742   48683 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 08:47:13.946745   48683 command_runner.go:130] >           "io_type": "",
	I1206 08:47:13.946748   48683 command_runner.go:130] >           "options": {
	I1206 08:47:13.946752   48683 command_runner.go:130] >             "BinaryName": "",
	I1206 08:47:13.946756   48683 command_runner.go:130] >             "CriuImagePath": "",
	I1206 08:47:13.946761   48683 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 08:47:13.946764   48683 command_runner.go:130] >             "IoGid": 0,
	I1206 08:47:13.946768   48683 command_runner.go:130] >             "IoUid": 0,
	I1206 08:47:13.946772   48683 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 08:47:13.946776   48683 command_runner.go:130] >             "Root": "",
	I1206 08:47:13.946780   48683 command_runner.go:130] >             "ShimCgroup": "",
	I1206 08:47:13.946784   48683 command_runner.go:130] >             "SystemdCgroup": false
	I1206 08:47:13.946787   48683 command_runner.go:130] >           },
	I1206 08:47:13.946793   48683 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 08:47:13.946799   48683 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 08:47:13.946803   48683 command_runner.go:130] >           "runtimePath": "",
	I1206 08:47:13.946808   48683 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 08:47:13.946812   48683 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 08:47:13.946816   48683 command_runner.go:130] >           "snapshotter": ""
	I1206 08:47:13.946820   48683 command_runner.go:130] >         }
	I1206 08:47:13.946823   48683 command_runner.go:130] >       }
	I1206 08:47:13.946826   48683 command_runner.go:130] >     },
	I1206 08:47:13.946836   48683 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 08:47:13.946848   48683 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 08:47:13.946854   48683 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 08:47:13.946858   48683 command_runner.go:130] >     "disableApparmor": false,
	I1206 08:47:13.946863   48683 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 08:47:13.946867   48683 command_runner.go:130] >     "disableProcMount": false,
	I1206 08:47:13.946871   48683 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 08:47:13.946874   48683 command_runner.go:130] >     "enableCDI": true,
	I1206 08:47:13.946878   48683 command_runner.go:130] >     "enableSelinux": false,
	I1206 08:47:13.946883   48683 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 08:47:13.946887   48683 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 08:47:13.946891   48683 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 08:47:13.946896   48683 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 08:47:13.946900   48683 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 08:47:13.946905   48683 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 08:47:13.946909   48683 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 08:47:13.946917   48683 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946922   48683 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 08:47:13.946928   48683 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946932   48683 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 08:47:13.946937   48683 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 08:47:13.946940   48683 command_runner.go:130] >   },
	I1206 08:47:13.946943   48683 command_runner.go:130] >   "features": {
	I1206 08:47:13.946948   48683 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 08:47:13.946951   48683 command_runner.go:130] >   },
	I1206 08:47:13.946955   48683 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 08:47:13.946964   48683 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946974   48683 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946977   48683 command_runner.go:130] >   "runtimeHandlers": [
	I1206 08:47:13.946980   48683 command_runner.go:130] >     {
	I1206 08:47:13.946984   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.946988   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.946992   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.946996   48683 command_runner.go:130] >       }
	I1206 08:47:13.947002   48683 command_runner.go:130] >     },
	I1206 08:47:13.947006   48683 command_runner.go:130] >     {
	I1206 08:47:13.947009   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.947015   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.947019   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.947022   48683 command_runner.go:130] >       },
	I1206 08:47:13.947026   48683 command_runner.go:130] >       "name": "runc"
	I1206 08:47:13.947029   48683 command_runner.go:130] >     }
	I1206 08:47:13.947032   48683 command_runner.go:130] >   ],
	I1206 08:47:13.947035   48683 command_runner.go:130] >   "status": {
	I1206 08:47:13.947039   48683 command_runner.go:130] >     "conditions": [
	I1206 08:47:13.947042   48683 command_runner.go:130] >       {
	I1206 08:47:13.947046   48683 command_runner.go:130] >         "message": "",
	I1206 08:47:13.947050   48683 command_runner.go:130] >         "reason": "",
	I1206 08:47:13.947053   48683 command_runner.go:130] >         "status": true,
	I1206 08:47:13.947059   48683 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 08:47:13.947062   48683 command_runner.go:130] >       },
	I1206 08:47:13.947065   48683 command_runner.go:130] >       {
	I1206 08:47:13.947072   48683 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 08:47:13.947081   48683 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 08:47:13.947085   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947089   48683 command_runner.go:130] >         "type": "NetworkReady"
	I1206 08:47:13.947091   48683 command_runner.go:130] >       },
	I1206 08:47:13.947094   48683 command_runner.go:130] >       {
	I1206 08:47:13.947118   48683 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 08:47:13.947123   48683 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 08:47:13.947129   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947134   48683 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 08:47:13.947137   48683 command_runner.go:130] >       }
	I1206 08:47:13.947139   48683 command_runner.go:130] >     ]
	I1206 08:47:13.947142   48683 command_runner.go:130] >   }
	I1206 08:47:13.947144   48683 command_runner.go:130] > }
	I1206 08:47:13.947502   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:13.947519   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:47:13.947541   48683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
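
	The crictl info dump above is also where the NetworkReady=false condition appears (reason NetworkPluginNotReady) until kindnet writes a config into /etc/cni/net.d, which is why the lines above go on to pick a CNI. A minimal Go sketch, assuming crictl is available on the node, of pulling those runtime conditions out of the same JSON:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type condition struct {
    	Type    string `json:"type"`
    	Status  bool   `json:"status"`
    	Reason  string `json:"reason"`
    	Message string `json:"message"`
    }

    type info struct {
    	Status struct {
    		Conditions []condition `json:"conditions"`
    	} `json:"status"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "info").Output()
    	if err != nil {
    		panic(err)
    	}
    	var i info
    	if err := json.Unmarshal(out, &i); err != nil {
    		panic(err)
    	}
    	for _, c := range i.Status.Conditions {
    		// e.g. NetworkReady=false / NetworkPluginNotReady before a CNI config exists
    		fmt.Printf("%s=%v reason=%q\n", c.Type, c.Status, c.Reason)
    	}
    }
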
	I1206 08:47:13.947564   48683 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:47:13.947673   48683 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
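	The generated kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal Go sketch, using gopkg.in/yaml.v3 and the kubeadm.yaml.new path written a few lines below, of walking the stream and listing each document's apiVersion and kind; illustrative only:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log below
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// yaml.v3's Decoder yields one document per Decode call until io.EOF.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }
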
	I1206 08:47:13.947742   48683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:47:13.955523   48683 command_runner.go:130] > kubeadm
	I1206 08:47:13.955542   48683 command_runner.go:130] > kubectl
	I1206 08:47:13.955546   48683 command_runner.go:130] > kubelet
	I1206 08:47:13.955560   48683 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:47:13.955622   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:47:13.963242   48683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:47:13.976514   48683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:47:13.994365   48683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 08:47:14.008131   48683 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:47:14.012074   48683 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
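
	The grep above confirms /etc/hosts already maps control-plane.minikube.internal to 192.168.49.2, so no rewrite is needed. A minimal Go sketch of the same check:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // Look for a "192.168.49.2 control-plane.minikube.internal" entry,
    // mirroring the grep in the log above.
    func main() {
    	f, err := os.Open("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "192.168.49.2" &&
    			fields[1] == "control-plane.minikube.internal" {
    			fmt.Println("entry present")
    			return
    		}
    	}
    	fmt.Println("entry missing")
    }
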
	I1206 08:47:14.012170   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:14.162349   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:14.970935   48683 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:47:14.971004   48683 certs.go:195] generating shared ca certs ...
	I1206 08:47:14.971035   48683 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:14.971212   48683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:47:14.971308   48683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:47:14.971340   48683 certs.go:257] generating profile certs ...
	I1206 08:47:14.971529   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:47:14.971755   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:47:14.971844   48683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:47:14.971869   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 08:47:14.971914   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 08:47:14.971945   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 08:47:14.971989   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 08:47:14.972021   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 08:47:14.972053   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 08:47:14.972085   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 08:47:14.972115   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 08:47:14.972198   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:47:14.972259   48683 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:47:14.972284   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:47:14.972342   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:47:14.972394   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:47:14.972452   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:47:14.972528   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:14.972579   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:14.972619   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem -> /usr/share/ca-certificates/4292.pem
	I1206 08:47:14.972659   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /usr/share/ca-certificates/42922.pem
	I1206 08:47:14.973224   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:47:14.995297   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:47:15.042161   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:47:15.062885   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:47:15.082018   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:47:15.101436   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:47:15.120061   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:47:15.140257   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:47:15.160107   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:47:15.178980   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:47:15.197893   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:47:15.216224   48683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:47:15.229330   48683 ssh_runner.go:195] Run: openssl version
	I1206 08:47:15.235331   48683 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 08:47:15.235817   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.243429   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:47:15.250764   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254643   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254673   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254723   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.295906   48683 command_runner.go:130] > b5213941
	I1206 08:47:15.295990   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:47:15.303441   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.310784   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:47:15.318504   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322051   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322380   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322461   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.363237   48683 command_runner.go:130] > 51391683
	I1206 08:47:15.363703   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:47:15.371299   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.378918   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:47:15.386367   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390281   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390354   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390410   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.431004   48683 command_runner.go:130] > 3ec20f2e
	I1206 08:47:15.431441   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 08:47:15.439072   48683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442819   48683 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442856   48683 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 08:47:15.442863   48683 command_runner.go:130] > Device: 259,1	Inode: 1055659     Links: 1
	I1206 08:47:15.442870   48683 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:15.442877   48683 command_runner.go:130] > Access: 2025-12-06 08:43:07.824678266 +0000
	I1206 08:47:15.442882   48683 command_runner.go:130] > Modify: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442890   48683 command_runner.go:130] > Change: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442895   48683 command_runner.go:130] >  Birth: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442956   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 08:47:15.483144   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.483601   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 08:47:15.524376   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.524527   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 08:47:15.567333   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.567897   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 08:47:15.609722   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.610195   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 08:47:15.652939   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.653458   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 08:47:15.694815   48683 command_runner.go:130] > Certificate will not expire
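
	Each "openssl x509 -checkend 86400" call above asks whether the certificate expires within the next 24 hours (86400 seconds). A rough Go equivalent of one such check, using the apiserver-kubelet-client.crt path from the log; a sketch, not minikube's implementation:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Same question as `openssl x509 -noout -checkend 86400`:
    	// does NotAfter fall within the next 24 hours?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }
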
	I1206 08:47:15.695278   48683 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:15.695370   48683 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:47:15.695465   48683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:47:15.724990   48683 cri.go:89] found id: ""
	I1206 08:47:15.725064   48683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:47:15.732181   48683 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 08:47:15.732210   48683 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 08:47:15.732217   48683 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 08:47:15.733102   48683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 08:47:15.733116   48683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 08:47:15.733169   48683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 08:47:15.740768   48683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:47:15.741168   48683 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-090986" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.741273   48683 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "functional-090986" cluster setting kubeconfig missing "functional-090986" context setting]
	I1206 08:47:15.741558   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.741975   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.742128   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.742650   48683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 08:47:15.742669   48683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 08:47:15.742675   48683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 08:47:15.742680   48683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 08:47:15.742685   48683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 08:47:15.742976   48683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 08:47:15.743070   48683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 08:47:15.750828   48683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 08:47:15.750861   48683 kubeadm.go:602] duration metric: took 17.739612ms to restartPrimaryControlPlane
	I1206 08:47:15.750871   48683 kubeadm.go:403] duration metric: took 55.600148ms to StartCluster
	I1206 08:47:15.750890   48683 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.750966   48683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.751639   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.751842   48683 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 08:47:15.752180   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:15.752232   48683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 08:47:15.752302   48683 addons.go:70] Setting storage-provisioner=true in profile "functional-090986"
	I1206 08:47:15.752319   48683 addons.go:239] Setting addon storage-provisioner=true in "functional-090986"
	I1206 08:47:15.752322   48683 addons.go:70] Setting default-storageclass=true in profile "functional-090986"
	I1206 08:47:15.752340   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.752341   48683 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-090986"
	I1206 08:47:15.752637   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.752784   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.759188   48683 out.go:179] * Verifying Kubernetes components...
	I1206 08:47:15.762058   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:15.783651   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.783826   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.785192   48683 addons.go:239] Setting addon default-storageclass=true in "functional-090986"
	I1206 08:47:15.785238   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.785700   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.797451   48683 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 08:47:15.800625   48683 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:15.800648   48683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 08:47:15.800725   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.810048   48683 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:15.810080   48683 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 08:47:15.810147   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.824818   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:15.853374   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:15.963935   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:15.994167   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.016409   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:16.722308   48683 node_ready.go:35] waiting up to 6m0s for node "functional-090986" to be "Ready" ...
	I1206 08:47:16.722441   48683 type.go:168] "Request Body" body=""
	I1206 08:47:16.722509   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:16.722791   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.722902   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:16.722979   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722997   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723021   48683 retry.go:31] will retry after 246.599259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722932   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723088   48683 retry.go:31] will retry after 155.728524ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
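Each apply fails because nothing is listening on localhost:8441 yet, so retry.go reschedules the addon apply with a growing, jittered delay (155ms and 246ms here, climbing toward multi-second waits later in the log). A minimal sketch of that retry shape, assuming a simple doubling-plus-jitter schedule rather than minikube's actual backoff:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping a
    // growing, jittered delay between failures, like the "will retry after"
    // lines above.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	_ = retry(5, 200*time.Millisecond, func() error {
    		return errors.New("connect: connection refused")
    	})
    }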
	I1206 08:47:16.879530   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.938491   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.942697   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.942739   48683 retry.go:31] will retry after 198.095926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.969843   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.032387   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.037081   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.037167   48683 retry.go:31] will retry after 340.655262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.141488   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:17.200483   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.200581   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.200607   48683 retry.go:31] will retry after 823.921965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.222635   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.222706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.222990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:17.378343   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.437909   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.437949   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.437997   48683 retry.go:31] will retry after 597.373907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.723431   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.723506   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.723862   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.025532   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:18.036222   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:18.102548   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.106195   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.106289   48683 retry.go:31] will retry after 988.595122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128444   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.128537   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128579   48683 retry.go:31] will retry after 1.22957213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.222734   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.222810   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.223190   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.722737   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.722827   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.723191   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:18.723277   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
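From here the log interleaves the addon retries with node_ready.go polling GET /api/v1/nodes/functional-090986 roughly every 500ms, tolerating connection-refused errors until the apiserver comes back, up to the 6m0s budget stated earlier. A client-go sketch of that wait loop (function name, interval handling, and the Insecure shortcut are illustrative; a real client would carry the certs from the kapi.go sketch above):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // waitNodeReady polls the node every 500ms until its Ready condition is
    // True or the context expires; transient errors (e.g. connection refused
    // while the apiserver restarts) are logged and retried, as above.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		} else {
    			fmt.Printf("will retry: %v\n", err)
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8441",
    		// Insecure keeps the sketch short; real use needs the client certs.
    		TLSClientConfig: rest.TLSClientConfig{Insecure: true},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	fmt.Println(waitNodeReady(ctx, cs, "functional-090986"))
    }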
	I1206 08:47:19.095767   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:19.151460   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.155168   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.155201   48683 retry.go:31] will retry after 1.717558752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.223503   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.223595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.223937   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:19.358372   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:19.411770   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.415269   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.415303   48683 retry.go:31] will retry after 781.287082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.722648   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.722942   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.197734   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:20.223123   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.223196   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.223547   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.262283   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.262363   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.262407   48683 retry.go:31] will retry after 1.829414459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.722870   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.722941   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.723284   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:20.723338   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:20.873661   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:20.932799   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.936985   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.937020   48683 retry.go:31] will retry after 2.554499586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:21.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.223553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.223934   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:21.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.722674   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.723048   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.092657   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:22.149785   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:22.153326   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.153368   48683 retry.go:31] will retry after 2.084938041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.222743   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.223181   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.722901   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.722987   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.723330   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:22.723402   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:23.223196   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.223285   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.223660   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:23.492173   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:23.557652   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:23.557715   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.557741   48683 retry.go:31] will retry after 4.19827742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.723091   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.723166   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.723482   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.223263   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.223339   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.223623   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.238906   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:24.307275   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:24.307320   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.307339   48683 retry.go:31] will retry after 4.494270685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.722793   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.722877   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.723244   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:25.222930   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.223006   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.223365   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:25.223455   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:25.723213   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.723279   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.723596   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.223491   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.223588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.223913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.722621   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.722699   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.723036   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.222525   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.222628   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.222892   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.722651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.722982   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:27.723035   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:27.756528   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:27.814954   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:27.818792   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:27.818824   48683 retry.go:31] will retry after 5.399057422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:28.223412   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.223490   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.223811   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.723414   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.723485   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.723794   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.802108   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:28.864913   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:28.864953   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:28.864972   48683 retry.go:31] will retry after 3.285056528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:29.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.223556   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.223857   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:29.722601   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.723030   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:29.723087   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:30.222650   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.222720   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.223035   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:30.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.222982   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.223061   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.223424   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.723202   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.723273   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.723614   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:31.723661   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:32.150291   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:32.207920   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:32.211781   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.211813   48683 retry.go:31] will retry after 10.805243336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.223065   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.223158   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.223541   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:32.723329   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.723438   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.723744   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.218182   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:33.222610   48683 type.go:168] "Request Body" body=""
	I1206 08:47:33.222677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:33.222931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.295753   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:33.295946   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:33.295967   48683 retry.go:31] will retry after 9.227502372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:33.723484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:33.723575   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:33.723917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:33.723973   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:34.222605   48683 type.go:168] "Request Body" body=""
	I1206 08:47:34.222681   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:34.223037   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:34.723424   48683 type.go:168] "Request Body" body=""
	I1206 08:47:34.723499   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:34.723811   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:35.222543   48683 type.go:168] "Request Body" body=""
	I1206 08:47:35.222621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:35.222963   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:35.722601   48683 type.go:168] "Request Body" body=""
	I1206 08:47:35.722678   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:35.723029   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:36.223123   48683 type.go:168] "Request Body" body=""
	I1206 08:47:36.223195   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:36.223476   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:36.223516   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:36.723305   48683 type.go:168] "Request Body" body=""
	I1206 08:47:36.723388   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:36.723674   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:37.223484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:37.223557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:37.223866   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:37.723315   48683 type.go:168] "Request Body" body=""
	I1206 08:47:37.723395   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:37.723693   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:38.223481   48683 type.go:168] "Request Body" body=""
	I1206 08:47:38.223553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:38.223887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:38.223937   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:38.722588   48683 type.go:168] "Request Body" body=""
	I1206 08:47:38.722659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:38.723024   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:39.223350   48683 type.go:168] "Request Body" body=""
	I1206 08:47:39.223435   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:39.223711   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:39.723507   48683 type.go:168] "Request Body" body=""
	I1206 08:47:39.723587   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:39.723926   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:40.222518   48683 type.go:168] "Request Body" body=""
	I1206 08:47:40.222602   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:40.223000   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:40.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:47:40.723573   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:40.723901   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:40.723952   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:41.222532   48683 type.go:168] "Request Body" body=""
	I1206 08:47:41.222606   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:41.222910   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:41.722688   48683 type.go:168] "Request Body" body=""
	I1206 08:47:41.722766   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:41.723083   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:42.222810   48683 type.go:168] "Request Body" body=""
	I1206 08:47:42.222891   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:42.223201   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:42.523700   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:42.586651   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:42.586695   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:42.586713   48683 retry.go:31] will retry after 12.2898811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
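[Editor's note] The apply/retry pairs above follow a simple pattern: shell out to kubectl, and on a non-zero exit schedule another attempt after a randomized delay (the waits of 12.29s, 19.49s, 17.91s, ... later in this log are jittered, not a fixed schedule). A minimal sketch of that pattern using os/exec — an illustration of what the log records, not minikube's actual retry.go:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyAddon shells out exactly like the log lines above; kubectl's
// "Process exited with status 1" comes back as a non-nil error.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v\n%s", err, out)
	}
	return nil
}

func main() {
	manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
	for attempt := 1; attempt <= 6; attempt++ {
		err := applyAddon(manifest)
		if err == nil {
			fmt.Println("applied", manifest)
			return
		}
		// Randomized wait in roughly the 10-30s band seen in this log.
		wait := 10*time.Second + time.Duration(rand.Int63n(int64(20*time.Second)))
		fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
}

Note why every attempt dies at the same step: kubectl validates the manifest by downloading the OpenAPI schema from https://localhost:8441/openapi/v2 before applying anything, so with the apiserver down the failure surfaces at validation (hence the hint about --validate=false) rather than at admission.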
	I1206 08:47:42.723024   48683 type.go:168] "Request Body" body=""
	I1206 08:47:42.723100   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:42.723445   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:43.017838   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:43.079371   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:43.079435   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:43.079458   48683 retry.go:31] will retry after 19.494910144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:43.222603   48683 type.go:168] "Request Body" body=""
	I1206 08:47:43.222692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:43.223135   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:43.223199   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:43.722619   48683 type.go:168] "Request Body" body=""
	I1206 08:47:43.722697   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:43.722959   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:44.222540   48683 type.go:168] "Request Body" body=""
	I1206 08:47:44.222614   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:44.222964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:44.722637   48683 type.go:168] "Request Body" body=""
	I1206 08:47:44.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:44.723067   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:45.222713   48683 type.go:168] "Request Body" body=""
	I1206 08:47:45.222784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:45.223156   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:45.223228   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:45.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:47:45.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:45.722969   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:46.223003   48683 type.go:168] "Request Body" body=""
	I1206 08:47:46.223089   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:46.223469   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:46.723273   48683 type.go:168] "Request Body" body=""
	I1206 08:47:46.723345   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:46.723681   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:47.223099   48683 type.go:168] "Request Body" body=""
	I1206 08:47:47.223167   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:47.223496   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:47.223542   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:47.723308   48683 type.go:168] "Request Body" body=""
	I1206 08:47:47.723392   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:47.723713   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:48.223454   48683 type.go:168] "Request Body" body=""
	I1206 08:47:48.223519   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:48.223802   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:48.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:47:48.722647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:48.722998   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:49.222713   48683 type.go:168] "Request Body" body=""
	I1206 08:47:49.222788   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:49.223109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:49.722484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:49.722561   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:49.722823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:49.722870   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
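[Editor's note] The two dial errors in this log point at the same diagnosis: neither the node IP (192.168.49.2:8441, used by minikube's client) nor loopback inside the node ([::1]:8441, used by kubectl) is accepting connections, so the kube-apiserver process itself is down rather than anything being wrong with routing. A quick way to reproduce that check — a minimal sketch with net.DialTimeout, using the addresses taken from this log:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts a plain TCP connect, the same step that fails above
// with "connect: connection refused".
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("%s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: port is accepting connections\n", addr)
}

func main() {
	probe("192.168.49.2:8441") // node IP seen by minikube's wait loop
	probe("[::1]:8441")        // loopback used by kubectl inside the node
}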
	I1206 08:47:50.222580   48683 type.go:168] "Request Body" body=""
	I1206 08:47:50.222659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:50.222990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:50.722702   48683 type.go:168] "Request Body" body=""
	I1206 08:47:50.722785   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:50.723086   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:51.222858   48683 type.go:168] "Request Body" body=""
	I1206 08:47:51.222936   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:51.223324   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:51.723237   48683 type.go:168] "Request Body" body=""
	I1206 08:47:51.723311   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:51.723634   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:51.723682   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:52.223453   48683 type.go:168] "Request Body" body=""
	I1206 08:47:52.223522   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:52.223869   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:52.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:47:52.722642   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:52.722897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:53.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:47:53.222638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:53.222985   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:53.722688   48683 type.go:168] "Request Body" body=""
	I1206 08:47:53.722770   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:53.723108   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:54.222503   48683 type.go:168] "Request Body" body=""
	I1206 08:47:54.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:54.222905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:54.222955   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:54.722588   48683 type.go:168] "Request Body" body=""
	I1206 08:47:54.722660   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:54.723065   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:54.877464   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:54.933804   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:54.937955   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:54.937987   48683 retry.go:31] will retry after 17.91075527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:55.223442   48683 type.go:168] "Request Body" body=""
	I1206 08:47:55.223519   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:55.223852   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:55.722542   48683 type.go:168] "Request Body" body=""
	I1206 08:47:55.722606   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:55.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:56.222999   48683 type.go:168] "Request Body" body=""
	I1206 08:47:56.223070   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:56.223429   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:56.223487   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:56.723218   48683 type.go:168] "Request Body" body=""
	I1206 08:47:56.723287   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:56.723646   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:57.223125   48683 type.go:168] "Request Body" body=""
	I1206 08:47:57.223203   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:57.223494   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:57.722995   48683 type.go:168] "Request Body" body=""
	I1206 08:47:57.723069   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:57.723443   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:58.223117   48683 type.go:168] "Request Body" body=""
	I1206 08:47:58.223189   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:58.223566   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:58.223620   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:58.723372   48683 type.go:168] "Request Body" body=""
	I1206 08:47:58.723454   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:58.723711   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:59.223465   48683 type.go:168] "Request Body" body=""
	I1206 08:47:59.223543   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:59.223912   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:59.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:47:59.722619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:59.722939   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:00.247414   48683 type.go:168] "Request Body" body=""
	I1206 08:48:00.247503   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:00.247882   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:00.247935   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:00.722555   48683 type.go:168] "Request Body" body=""
	I1206 08:48:00.722626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:00.722938   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:01.222887   48683 type.go:168] "Request Body" body=""
	I1206 08:48:01.222999   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:01.223358   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:01.723162   48683 type.go:168] "Request Body" body=""
	I1206 08:48:01.723235   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:01.723597   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:02.223412   48683 type.go:168] "Request Body" body=""
	I1206 08:48:02.223493   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:02.223823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:02.575367   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:02.637904   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:02.637958   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:02.637977   48683 retry.go:31] will retry after 12.943468008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:02.723120   48683 type.go:168] "Request Body" body=""
	I1206 08:48:02.723231   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:02.723512   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:02.723552   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:03.223325   48683 type.go:168] "Request Body" body=""
	I1206 08:48:03.223416   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:03.223738   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:03.723412   48683 type.go:168] "Request Body" body=""
	I1206 08:48:03.723492   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:03.723836   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:04.222479   48683 type.go:168] "Request Body" body=""
	I1206 08:48:04.222557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:04.222823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:04.722559   48683 type.go:168] "Request Body" body=""
	I1206 08:48:04.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:04.722983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:05.222708   48683 type.go:168] "Request Body" body=""
	I1206 08:48:05.222783   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:05.223149   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:05.223222   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:05.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:48:05.722620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:05.722946   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:06.223159   48683 type.go:168] "Request Body" body=""
	I1206 08:48:06.223264   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:06.223665   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:06.723461   48683 type.go:168] "Request Body" body=""
	I1206 08:48:06.723536   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:06.723855   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:07.222524   48683 type.go:168] "Request Body" body=""
	I1206 08:48:07.222592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:07.222878   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:07.722594   48683 type.go:168] "Request Body" body=""
	I1206 08:48:07.722670   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:07.723027   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:07.723084   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:08.222600   48683 type.go:168] "Request Body" body=""
	I1206 08:48:08.222686   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:08.223036   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:08.722507   48683 type.go:168] "Request Body" body=""
	I1206 08:48:08.722579   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:08.722903   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:09.222614   48683 type.go:168] "Request Body" body=""
	I1206 08:48:09.222685   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:09.222989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:09.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:48:09.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:09.723015   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:10.223441   48683 type.go:168] "Request Body" body=""
	I1206 08:48:10.223507   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:10.223798   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:10.223853   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:10.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:48:10.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:10.723077   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:11.222928   48683 type.go:168] "Request Body" body=""
	I1206 08:48:11.223022   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:11.223407   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:11.723237   48683 type.go:168] "Request Body" body=""
	I1206 08:48:11.723308   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:11.723611   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:12.223404   48683 type.go:168] "Request Body" body=""
	I1206 08:48:12.223497   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:12.223815   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:12.223876   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:12.722553   48683 type.go:168] "Request Body" body=""
	I1206 08:48:12.722626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:12.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:12.849275   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:12.904952   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:12.908634   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:12.908667   48683 retry.go:31] will retry after 25.236445918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:13.223053   48683 type.go:168] "Request Body" body=""
	I1206 08:48:13.223119   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:13.223405   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:13.723248   48683 type.go:168] "Request Body" body=""
	I1206 08:48:13.723328   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:13.723664   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:14.223478   48683 type.go:168] "Request Body" body=""
	I1206 08:48:14.223558   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:14.223874   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:14.223925   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:14.722512   48683 type.go:168] "Request Body" body=""
	I1206 08:48:14.722592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:14.722886   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:15.222579   48683 type.go:168] "Request Body" body=""
	I1206 08:48:15.222667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:15.222959   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:15.582577   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:15.646326   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:15.649856   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:15.649887   48683 retry.go:31] will retry after 20.09954841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:15.723221   48683 type.go:168] "Request Body" body=""
	I1206 08:48:15.723293   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:15.723656   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:16.222458   48683 type.go:168] "Request Body" body=""
	I1206 08:48:16.222526   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:16.222836   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:16.723520   48683 type.go:168] "Request Body" body=""
	I1206 08:48:16.723594   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:16.723935   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:16.723996   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:17.222517   48683 type.go:168] "Request Body" body=""
	I1206 08:48:17.222599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:17.222939   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:17.722495   48683 type.go:168] "Request Body" body=""
	I1206 08:48:17.722573   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:17.722891   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:18.222579   48683 type.go:168] "Request Body" body=""
	I1206 08:48:18.222652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:18.222993   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:18.722583   48683 type.go:168] "Request Body" body=""
	I1206 08:48:18.722663   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:18.723022   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:19.222693   48683 type.go:168] "Request Body" body=""
	I1206 08:48:19.222763   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:19.223022   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:19.223069   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:19.722578   48683 type.go:168] "Request Body" body=""
	I1206 08:48:19.722651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:19.723010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:20.222589   48683 type.go:168] "Request Body" body=""
	I1206 08:48:20.222663   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:20.223016   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:20.723476   48683 type.go:168] "Request Body" body=""
	I1206 08:48:20.723548   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:20.723815   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:21.222793   48683 type.go:168] "Request Body" body=""
	I1206 08:48:21.222863   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:21.223194   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:21.223251   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:21.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:48:21.722654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:21.722963   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:22.222619   48683 type.go:168] "Request Body" body=""
	I1206 08:48:22.222685   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:22.222954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:22.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:48:22.722647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:22.722987   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:23.222686   48683 type.go:168] "Request Body" body=""
	I1206 08:48:23.222759   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:23.223112   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:23.722801   48683 type.go:168] "Request Body" body=""
	I1206 08:48:23.722870   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:23.723132   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:23.723172   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:24.222563   48683 type.go:168] "Request Body" body=""
	I1206 08:48:24.222639   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:24.222974   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:24.722536   48683 type.go:168] "Request Body" body=""
	I1206 08:48:24.722609   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:24.722956   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:25.223202   48683 type.go:168] "Request Body" body=""
	I1206 08:48:25.223267   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:25.223549   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:25.723347   48683 type.go:168] "Request Body" body=""
	I1206 08:48:25.723448   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:25.723817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:25.723881   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:26.222857   48683 type.go:168] "Request Body" body=""
	I1206 08:48:26.222930   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:26.223262   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:26.722500   48683 type.go:168] "Request Body" body=""
	I1206 08:48:26.722570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:26.722886   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:27.222528   48683 type.go:168] "Request Body" body=""
	I1206 08:48:27.222598   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:27.222916   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:27.722616   48683 type.go:168] "Request Body" body=""
	I1206 08:48:27.722695   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:27.723043   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:28.222543   48683 type.go:168] "Request Body" body=""
	I1206 08:48:28.222622   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:28.222922   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:28.222981   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:28.722606   48683 type.go:168] "Request Body" body=""
	I1206 08:48:28.722703   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:28.723095   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:29.222574   48683 type.go:168] "Request Body" body=""
	I1206 08:48:29.222649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:29.222993   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:29.722674   48683 type.go:168] "Request Body" body=""
	I1206 08:48:29.722742   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:29.723069   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:30.222789   48683 type.go:168] "Request Body" body=""
	I1206 08:48:30.222864   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:30.223189   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:30.223256   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:30.722578   48683 type.go:168] "Request Body" body=""
	I1206 08:48:30.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:30.722991   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:31.223492   48683 type.go:168] "Request Body" body=""
	I1206 08:48:31.223567   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:31.223833   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:31.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:48:31.722637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:31.722991   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:32.222686   48683 type.go:168] "Request Body" body=""
	I1206 08:48:32.222773   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:32.223092   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:32.722517   48683 type.go:168] "Request Body" body=""
	I1206 08:48:32.722582   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:32.722842   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:32.722882   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:33.222543   48683 type.go:168] "Request Body" body=""
	I1206 08:48:33.222618   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:33.222970   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:33.722516   48683 type.go:168] "Request Body" body=""
	I1206 08:48:33.722591   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:33.722945   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:34.223311   48683 type.go:168] "Request Body" body=""
	I1206 08:48:34.223394   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:34.223656   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:34.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:48:34.723571   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:34.723917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:34.723969   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:35.222564   48683 type.go:168] "Request Body" body=""
	I1206 08:48:35.222638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:35.222962   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:35.722532   48683 type.go:168] "Request Body" body=""
	I1206 08:48:35.722600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:35.722854   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:35.750369   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:35.818338   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818385   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818494   48683 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 08:48:36.223177   48683 type.go:168] "Request Body" body=""
	I1206 08:48:36.223245   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:36.223588   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:36.723297   48683 type.go:168] "Request Body" body=""
	I1206 08:48:36.723369   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:36.723715   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:37.223358   48683 type.go:168] "Request Body" body=""
	I1206 08:48:37.223441   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:37.223795   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:37.223851   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:37.723459   48683 type.go:168] "Request Body" body=""
	I1206 08:48:37.723575   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:37.723923   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:38.145414   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:38.206093   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210075   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210171   48683 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 08:48:38.213345   48683 out.go:179] * Enabled addons: 
	I1206 08:48:38.217127   48683 addons.go:530] duration metric: took 1m22.464883403s for enable addons: enabled=[]
	I1206 08:48:38.223238   48683 type.go:168] "Request Body" body=""
	I1206 08:48:38.223319   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:38.223680   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:38.723466   48683 type.go:168] "Request Body" body=""
	I1206 08:48:38.723534   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:38.723871   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:39.222501   48683 type.go:168] "Request Body" body=""
	I1206 08:48:39.222572   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:39.222930   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:39.722607   48683 type.go:168] "Request Body" body=""
	I1206 08:48:39.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:39.723013   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:39.723066   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:40.222676   48683 type.go:168] "Request Body" body=""
	I1206 08:48:40.222756   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:40.223027   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:40.722552   48683 type.go:168] "Request Body" body=""
	I1206 08:48:40.722649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:40.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:41.223120   48683 type.go:168] "Request Body" body=""
	I1206 08:48:41.223193   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:41.223622   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:41.723403   48683 type.go:168] "Request Body" body=""
	I1206 08:48:41.723475   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:41.723817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:41.723873   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:42.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:48:42.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:42.222978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:42.722684   48683 type.go:168] "Request Body" body=""
	I1206 08:48:42.722790   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:42.723129   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:43.222817   48683 type.go:168] "Request Body" body=""
	I1206 08:48:43.222915   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:43.223184   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:43.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:48:43.722658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:43.723004   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:44.222598   48683 type.go:168] "Request Body" body=""
	I1206 08:48:44.222684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:44.223013   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:44.223067   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:44.722714   48683 type.go:168] "Request Body" body=""
	I1206 08:48:44.722785   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:44.723069   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:45.222844   48683 type.go:168] "Request Body" body=""
	I1206 08:48:45.222932   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:45.223348   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:45.723174   48683 type.go:168] "Request Body" body=""
	I1206 08:48:45.723260   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:45.723605   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:46.222507   48683 type.go:168] "Request Body" body=""
	I1206 08:48:46.222584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:46.222918   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:46.722555   48683 type.go:168] "Request Body" body=""
	I1206 08:48:46.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:46.722952   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:46.723007   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:47.222685   48683 type.go:168] "Request Body" body=""
	I1206 08:48:47.222760   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:47.223112   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:47.722496   48683 type.go:168] "Request Body" body=""
	I1206 08:48:47.722563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:47.722826   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:48.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:48:48.222616   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:48.222974   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:48.722711   48683 type.go:168] "Request Body" body=""
	I1206 08:48:48.722784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:48.723121   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:48.723172   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:49.222551   48683 type.go:168] "Request Body" body=""
	I1206 08:48:49.222621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:49.222915   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:49.722650   48683 type.go:168] "Request Body" body=""
	I1206 08:48:49.722727   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:49.723082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:50.222645   48683 type.go:168] "Request Body" body=""
	I1206 08:48:50.222761   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:50.223073   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:50.722501   48683 type.go:168] "Request Body" body=""
	I1206 08:48:50.722569   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:50.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:51.222952   48683 type.go:168] "Request Body" body=""
	I1206 08:48:51.223025   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:51.223425   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:51.223480   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:51.723105   48683 type.go:168] "Request Body" body=""
	I1206 08:48:51.723185   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:51.723538   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:52.223323   48683 type.go:168] "Request Body" body=""
	I1206 08:48:52.223409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:52.223689   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:52.722451   48683 type.go:168] "Request Body" body=""
	I1206 08:48:52.722525   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:52.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:53.222606   48683 type.go:168] "Request Body" body=""
	I1206 08:48:53.222684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:53.223017   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:53.722735   48683 type.go:168] "Request Body" body=""
	I1206 08:48:53.722801   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:53.723122   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:53.723177   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:54.222846   48683 type.go:168] "Request Body" body=""
	I1206 08:48:54.222924   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:54.223260   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:54.722973   48683 type.go:168] "Request Body" body=""
	I1206 08:48:54.723056   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:54.723447   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:55.223281   48683 type.go:168] "Request Body" body=""
	I1206 08:48:55.223354   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:55.223701   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:55.723485   48683 type.go:168] "Request Body" body=""
	I1206 08:48:55.723577   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:55.723911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:55.723962   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:56.222980   48683 type.go:168] "Request Body" body=""
	I1206 08:48:56.223059   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:56.223408   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:56.723182   48683 type.go:168] "Request Body" body=""
	I1206 08:48:56.723251   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:56.723637   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:57.223421   48683 type.go:168] "Request Body" body=""
	I1206 08:48:57.223498   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:57.223873   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:57.722566   48683 type.go:168] "Request Body" body=""
	I1206 08:48:57.722642   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:57.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:58.222529   48683 type.go:168] "Request Body" body=""
	I1206 08:48:58.222603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:58.222866   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:58.222905   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:58.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:48:58.722681   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:58.723002   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:59.222616   48683 type.go:168] "Request Body" body=""
	I1206 08:48:59.222687   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:59.223028   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:59.722572   48683 type.go:168] "Request Body" body=""
	I1206 08:48:59.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:59.722925   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:00.222639   48683 type.go:168] "Request Body" body=""
	I1206 08:49:00.222712   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:00.223014   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:00.223060   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:00.722635   48683 type.go:168] "Request Body" body=""
	I1206 08:49:00.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:00.723063   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:01.223028   48683 type.go:168] "Request Body" body=""
	I1206 08:49:01.223234   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:01.223616   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:01.723323   48683 type.go:168] "Request Body" body=""
	I1206 08:49:01.723423   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:01.723798   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:02.223472   48683 type.go:168] "Request Body" body=""
	I1206 08:49:02.223571   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:02.223936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:02.223997   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:02.722537   48683 type.go:168] "Request Body" body=""
	I1206 08:49:02.722619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:02.722919   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:03.222564   48683 type.go:168] "Request Body" body=""
	I1206 08:49:03.222635   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:03.222942   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:03.722533   48683 type.go:168] "Request Body" body=""
	I1206 08:49:03.722640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:03.722941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:04.222483   48683 type.go:168] "Request Body" body=""
	I1206 08:49:04.222572   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:04.222897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:04.722446   48683 type.go:168] "Request Body" body=""
	I1206 08:49:04.722517   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:04.722832   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:04.722879   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:05.222585   48683 type.go:168] "Request Body" body=""
	I1206 08:49:05.222673   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:05.222992   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:05.723297   48683 type.go:168] "Request Body" body=""
	I1206 08:49:05.723409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:05.723669   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:06.223469   48683 type.go:168] "Request Body" body=""
	I1206 08:49:06.223552   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:06.223906   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:06.722512   48683 type.go:168] "Request Body" body=""
	I1206 08:49:06.722590   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:06.722911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:06.722967   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:07.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:49:07.222610   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:07.222868   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:07.722572   48683 type.go:168] "Request Body" body=""
	I1206 08:49:07.722677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:07.723006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:08.222578   48683 type.go:168] "Request Body" body=""
	I1206 08:49:08.222672   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:08.222979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:08.722492   48683 type.go:168] "Request Body" body=""
	I1206 08:49:08.722560   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:08.722911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:09.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:49:09.222652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:09.222979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:09.223046   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:09.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:49:09.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:09.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:10.222533   48683 type.go:168] "Request Body" body=""
	I1206 08:49:10.222600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:10.222896   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:10.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:49:10.722654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:10.722954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:11.222977   48683 type.go:168] "Request Body" body=""
	I1206 08:49:11.223048   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:11.224357   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1206 08:49:11.224412   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:11.722526   48683 type.go:168] "Request Body" body=""
	I1206 08:49:11.722595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:11.722867   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:12.222588   48683 type.go:168] "Request Body" body=""
	I1206 08:49:12.222693   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:12.223079   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:12.722676   48683 type.go:168] "Request Body" body=""
	I1206 08:49:12.722753   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:12.723090   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:13.222539   48683 type.go:168] "Request Body" body=""
	I1206 08:49:13.222608   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:13.222924   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:13.722639   48683 type.go:168] "Request Body" body=""
	I1206 08:49:13.722719   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:13.723062   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:13.723117   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-090986 repeat every ~500ms from 08:49:14 through 08:50:14, each returning no response (status="" milliseconds=0); node_ready.go:55 logs the same "connection refused (will retry)" warning roughly every two seconds ...]
	I1206 08:50:15.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:50:15.222649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:15.222970   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:15.722583   48683 type.go:168] "Request Body" body=""
	I1206 08:50:15.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:15.722978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:16.223181   48683 type.go:168] "Request Body" body=""
	I1206 08:50:16.223255   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:16.223535   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:16.723326   48683 type.go:168] "Request Body" body=""
	I1206 08:50:16.723416   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:16.723757   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:16.723819   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:17.222495   48683 type.go:168] "Request Body" body=""
	I1206 08:50:17.222577   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:17.222914   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:17.722474   48683 type.go:168] "Request Body" body=""
	I1206 08:50:17.722547   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:17.722850   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:18.222583   48683 type.go:168] "Request Body" body=""
	I1206 08:50:18.222661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:18.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:18.722692   48683 type.go:168] "Request Body" body=""
	I1206 08:50:18.722776   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:18.723111   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:19.222505   48683 type.go:168] "Request Body" body=""
	I1206 08:50:19.222594   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:19.222859   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:19.222907   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:19.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:50:19.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:19.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:20.222684   48683 type.go:168] "Request Body" body=""
	I1206 08:50:20.222788   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:20.223168   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:20.722431   48683 type.go:168] "Request Body" body=""
	I1206 08:50:20.722497   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:20.722767   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:21.222641   48683 type.go:168] "Request Body" body=""
	I1206 08:50:21.222714   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:21.223070   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:21.223132   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:21.722822   48683 type.go:168] "Request Body" body=""
	I1206 08:50:21.722896   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:21.723237   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:22.222916   48683 type.go:168] "Request Body" body=""
	I1206 08:50:22.222997   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:22.223321   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:22.723124   48683 type.go:168] "Request Body" body=""
	I1206 08:50:22.723201   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:22.723551   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:23.223344   48683 type.go:168] "Request Body" body=""
	I1206 08:50:23.223446   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:23.223810   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:23.223863   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:23.722552   48683 type.go:168] "Request Body" body=""
	I1206 08:50:23.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:23.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:24.222565   48683 type.go:168] "Request Body" body=""
	I1206 08:50:24.222636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:24.222967   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:24.722582   48683 type.go:168] "Request Body" body=""
	I1206 08:50:24.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:24.723045   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:25.222591   48683 type.go:168] "Request Body" body=""
	I1206 08:50:25.222675   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:25.222956   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:25.722490   48683 type.go:168] "Request Body" body=""
	I1206 08:50:25.722558   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:25.722858   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:25.722902   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:26.222992   48683 type.go:168] "Request Body" body=""
	I1206 08:50:26.223066   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:26.223429   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:26.723227   48683 type.go:168] "Request Body" body=""
	I1206 08:50:26.723293   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:26.723619   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:27.223425   48683 type.go:168] "Request Body" body=""
	I1206 08:50:27.223499   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:27.223833   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:27.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:50:27.722621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:27.722968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:27.723024   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:28.222458   48683 type.go:168] "Request Body" body=""
	I1206 08:50:28.222528   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:28.222853   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:28.722553   48683 type.go:168] "Request Body" body=""
	I1206 08:50:28.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:28.722950   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:29.222553   48683 type.go:168] "Request Body" body=""
	I1206 08:50:29.222651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:29.222978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:29.722677   48683 type.go:168] "Request Body" body=""
	I1206 08:50:29.722755   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:29.723172   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:29.723243   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:30.222914   48683 type.go:168] "Request Body" body=""
	I1206 08:50:30.222992   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:30.223302   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:30.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:50:30.722632   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:30.722926   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:31.222879   48683 type.go:168] "Request Body" body=""
	I1206 08:50:31.222948   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:31.223214   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:31.722593   48683 type.go:168] "Request Body" body=""
	I1206 08:50:31.722667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:31.723003   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:32.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:50:32.222636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:32.222931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:32.222979   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:32.722487   48683 type.go:168] "Request Body" body=""
	I1206 08:50:32.722557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:32.722887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:33.222576   48683 type.go:168] "Request Body" body=""
	I1206 08:50:33.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:33.222988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:33.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:50:33.722658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:33.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:34.222527   48683 type.go:168] "Request Body" body=""
	I1206 08:50:34.222618   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:34.222896   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:34.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:50:34.722637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:34.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:34.723033   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:35.222710   48683 type.go:168] "Request Body" body=""
	I1206 08:50:35.222784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:35.223174   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:35.722627   48683 type.go:168] "Request Body" body=""
	I1206 08:50:35.722703   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:35.723010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:36.223126   48683 type.go:168] "Request Body" body=""
	I1206 08:50:36.223207   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:36.223553   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:36.723209   48683 type.go:168] "Request Body" body=""
	I1206 08:50:36.723279   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:36.723639   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:36.723696   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:37.223303   48683 type.go:168] "Request Body" body=""
	I1206 08:50:37.223402   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:37.223672   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:37.723463   48683 type.go:168] "Request Body" body=""
	I1206 08:50:37.723537   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:37.723869   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:38.222461   48683 type.go:168] "Request Body" body=""
	I1206 08:50:38.222541   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:38.222903   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:38.723171   48683 type.go:168] "Request Body" body=""
	I1206 08:50:38.723241   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:38.723601   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:39.223401   48683 type.go:168] "Request Body" body=""
	I1206 08:50:39.223483   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:39.223848   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:39.223901   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:39.722574   48683 type.go:168] "Request Body" body=""
	I1206 08:50:39.722647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:39.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:40.222657   48683 type.go:168] "Request Body" body=""
	I1206 08:50:40.222728   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:40.222993   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:40.722677   48683 type.go:168] "Request Body" body=""
	I1206 08:50:40.722746   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:40.723061   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:41.222889   48683 type.go:168] "Request Body" body=""
	I1206 08:50:41.222968   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:41.223319   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:41.722870   48683 type.go:168] "Request Body" body=""
	I1206 08:50:41.722996   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:41.723258   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:41.723307   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:42.223105   48683 type.go:168] "Request Body" body=""
	I1206 08:50:42.223193   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:42.223674   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:42.723351   48683 type.go:168] "Request Body" body=""
	I1206 08:50:42.723454   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:42.723771   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:43.222466   48683 type.go:168] "Request Body" body=""
	I1206 08:50:43.222542   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:43.222830   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:43.723509   48683 type.go:168] "Request Body" body=""
	I1206 08:50:43.723588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:43.723950   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:43.724004   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:44.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:50:44.222639   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:44.222958   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:44.722508   48683 type.go:168] "Request Body" body=""
	I1206 08:50:44.722579   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:44.722910   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:45.222798   48683 type.go:168] "Request Body" body=""
	I1206 08:50:45.223002   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:45.223897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:45.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:50:45.722648   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:45.722995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:46.224920   48683 type.go:168] "Request Body" body=""
	I1206 08:50:46.224987   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:46.225286   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:46.225327   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:46.723066   48683 type.go:168] "Request Body" body=""
	I1206 08:50:46.723140   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:46.723458   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:47.223236   48683 type.go:168] "Request Body" body=""
	I1206 08:50:47.223326   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:47.223694   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:47.723477   48683 type.go:168] "Request Body" body=""
	I1206 08:50:47.723544   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:47.723809   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:48.222493   48683 type.go:168] "Request Body" body=""
	I1206 08:50:48.222584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:48.222924   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:48.722583   48683 type.go:168] "Request Body" body=""
	I1206 08:50:48.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:48.722995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:48.723048   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:49.222689   48683 type.go:168] "Request Body" body=""
	I1206 08:50:49.222760   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:49.223029   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:49.722545   48683 type.go:168] "Request Body" body=""
	I1206 08:50:49.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:49.722955   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:50.222575   48683 type.go:168] "Request Body" body=""
	I1206 08:50:50.222657   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:50.223048   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:50.722508   48683 type.go:168] "Request Body" body=""
	I1206 08:50:50.722578   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:50.722889   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:51.222958   48683 type.go:168] "Request Body" body=""
	I1206 08:50:51.223044   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:51.223428   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:51.223484   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:51.723238   48683 type.go:168] "Request Body" body=""
	I1206 08:50:51.723326   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:51.723667   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:52.223431   48683 type.go:168] "Request Body" body=""
	I1206 08:50:52.223506   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:52.223847   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:52.723473   48683 type.go:168] "Request Body" body=""
	I1206 08:50:52.723546   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:52.723905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:53.222502   48683 type.go:168] "Request Body" body=""
	I1206 08:50:53.222578   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:53.222927   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:53.723407   48683 type.go:168] "Request Body" body=""
	I1206 08:50:53.723477   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:53.723780   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:53.723831   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:54.223258   48683 type.go:168] "Request Body" body=""
	I1206 08:50:54.223334   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:54.223684   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:54.723481   48683 type.go:168] "Request Body" body=""
	I1206 08:50:54.723559   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:54.723887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:55.222529   48683 type.go:168] "Request Body" body=""
	I1206 08:50:55.222605   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:55.222908   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:55.722574   48683 type.go:168] "Request Body" body=""
	I1206 08:50:55.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:55.722995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:56.223029   48683 type.go:168] "Request Body" body=""
	I1206 08:50:56.223100   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:56.223448   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:56.223504   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:56.723288   48683 type.go:168] "Request Body" body=""
	I1206 08:50:56.723362   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:56.723641   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:57.223425   48683 type.go:168] "Request Body" body=""
	I1206 08:50:57.223504   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:57.223865   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:57.722469   48683 type.go:168] "Request Body" body=""
	I1206 08:50:57.722544   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:57.722884   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:58.222570   48683 type.go:168] "Request Body" body=""
	I1206 08:50:58.222638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:58.222923   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:58.722609   48683 type.go:168] "Request Body" body=""
	I1206 08:50:58.722693   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:58.723034   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:58.723089   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:59.222617   48683 type.go:168] "Request Body" body=""
	I1206 08:50:59.222692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:59.223050   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:59.722758   48683 type.go:168] "Request Body" body=""
	I1206 08:50:59.722838   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:59.723205   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:00.222649   48683 type.go:168] "Request Body" body=""
	I1206 08:51:00.222741   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:00.223082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:00.722924   48683 type.go:168] "Request Body" body=""
	I1206 08:51:00.723002   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:00.723336   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:00.723407   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:01.223151   48683 type.go:168] "Request Body" body=""
	I1206 08:51:01.223227   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:01.223550   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:01.723316   48683 type.go:168] "Request Body" body=""
	I1206 08:51:01.723407   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:01.723750   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:02.222491   48683 type.go:168] "Request Body" body=""
	I1206 08:51:02.222569   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:02.222910   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:02.722535   48683 type.go:168] "Request Body" body=""
	I1206 08:51:02.722609   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:02.722882   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:03.222585   48683 type.go:168] "Request Body" body=""
	I1206 08:51:03.222667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:03.223010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:03.223074   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:03.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:51:03.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:03.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:04.222516   48683 type.go:168] "Request Body" body=""
	I1206 08:51:04.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:04.222840   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:04.722555   48683 type.go:168] "Request Body" body=""
	I1206 08:51:04.722628   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:04.722970   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:05.222698   48683 type.go:168] "Request Body" body=""
	I1206 08:51:05.222780   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:05.223093   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:05.223142   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:05.722471   48683 type.go:168] "Request Body" body=""
	I1206 08:51:05.722549   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:05.722864   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:06.223041   48683 type.go:168] "Request Body" body=""
	I1206 08:51:06.223120   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:06.223579   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:06.723396   48683 type.go:168] "Request Body" body=""
	I1206 08:51:06.723470   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:06.723824   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:07.222502   48683 type.go:168] "Request Body" body=""
	I1206 08:51:07.222581   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:07.222893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:07.722605   48683 type.go:168] "Request Body" body=""
	I1206 08:51:07.722673   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:07.723011   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:07.723085   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:08.222754   48683 type.go:168] "Request Body" body=""
	I1206 08:51:08.222842   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:08.223191   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:08.722662   48683 type.go:168] "Request Body" body=""
	I1206 08:51:08.722736   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:08.723038   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:09.222745   48683 type.go:168] "Request Body" body=""
	I1206 08:51:09.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:09.223142   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:09.722861   48683 type.go:168] "Request Body" body=""
	I1206 08:51:09.722941   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:09.723235   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:09.723279   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:10.222634   48683 type.go:168] "Request Body" body=""
	I1206 08:51:10.222706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:10.222971   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:10.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:51:10.722638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:10.722937   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:11.223528   48683 type.go:168] "Request Body" body=""
	I1206 08:51:11.223600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:11.223913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:11.723112   48683 type.go:168] "Request Body" body=""
	I1206 08:51:11.723177   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:11.723461   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:11.723503   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:12.223246   48683 type.go:168] "Request Body" body=""
	I1206 08:51:12.223319   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:12.223682   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:12.723520   48683 type.go:168] "Request Body" body=""
	I1206 08:51:12.723593   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:12.723946   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:13.222536   48683 type.go:168] "Request Body" body=""
	I1206 08:51:13.222617   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:13.222887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:13.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:51:13.722658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:13.722958   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:14.222582   48683 type.go:168] "Request Body" body=""
	I1206 08:51:14.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:14.222989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:14.223043   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:14.722452   48683 type.go:168] "Request Body" body=""
	I1206 08:51:14.722533   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:14.722845   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:15.222537   48683 type.go:168] "Request Body" body=""
	I1206 08:51:15.222613   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:15.222975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:15.722573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:15.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:15.723240   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:16.222683   48683 type.go:168] "Request Body" body=""
	I1206 08:51:16.222764   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:16.223039   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:16.223086   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:16.722581   48683 type.go:168] "Request Body" body=""
	I1206 08:51:16.722677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:16.723021   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:17.222582   48683 type.go:168] "Request Body" body=""
	I1206 08:51:17.222654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:17.223008   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:17.722565   48683 type.go:168] "Request Body" body=""
	I1206 08:51:17.722636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:17.722985   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:18.222580   48683 type.go:168] "Request Body" body=""
	I1206 08:51:18.222651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:18.222983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:18.722586   48683 type.go:168] "Request Body" body=""
	I1206 08:51:18.722665   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:18.723004   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:18.723061   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:19.222506   48683 type.go:168] "Request Body" body=""
	I1206 08:51:19.222578   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:19.222917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:19.722545   48683 type.go:168] "Request Body" body=""
	I1206 08:51:19.722616   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:19.722960   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:20.222654   48683 type.go:168] "Request Body" body=""
	I1206 08:51:20.222725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:20.223047   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:20.722711   48683 type.go:168] "Request Body" body=""
	I1206 08:51:20.722782   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:20.723050   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:20.723099   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:21.223085   48683 type.go:168] "Request Body" body=""
	I1206 08:51:21.223158   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:21.223561   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:21.723342   48683 type.go:168] "Request Body" body=""
	I1206 08:51:21.723426   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:21.723759   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:22.223473   48683 type.go:168] "Request Body" body=""
	I1206 08:51:22.223543   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:22.223901   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:22.722650   48683 type.go:168] "Request Body" body=""
	I1206 08:51:22.722720   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:22.723089   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:22.723144   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:23.222822   48683 type.go:168] "Request Body" body=""
	I1206 08:51:23.222899   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:23.223255   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:23.722516   48683 type.go:168] "Request Body" body=""
	I1206 08:51:23.722584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:23.722930   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:24.222655   48683 type.go:168] "Request Body" body=""
	I1206 08:51:24.222728   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:24.223082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:24.722672   48683 type.go:168] "Request Body" body=""
	I1206 08:51:24.722766   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:24.723136   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:24.723192   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:25.222510   48683 type.go:168] "Request Body" body=""
	I1206 08:51:25.222581   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:25.222889   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:25.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:51:25.722620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:25.722954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:26.223082   48683 type.go:168] "Request Body" body=""
	I1206 08:51:26.223153   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:26.223523   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:26.723172   48683 type.go:168] "Request Body" body=""
	I1206 08:51:26.723245   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:26.723542   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:26.723585   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:27.223401   48683 type.go:168] "Request Body" body=""
	I1206 08:51:27.223474   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:27.223854   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:27.722551   48683 type.go:168] "Request Body" body=""
	I1206 08:51:27.722624   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:27.722945   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:28.223483   48683 type.go:168] "Request Body" body=""
	I1206 08:51:28.223564   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:28.223873   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:28.722618   48683 type.go:168] "Request Body" body=""
	I1206 08:51:28.722696   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:28.723057   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:29.222647   48683 type.go:168] "Request Body" body=""
	I1206 08:51:29.222739   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:29.223145   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:29.223197   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:29.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:51:29.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:29.722968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:30.222659   48683 type.go:168] "Request Body" body=""
	I1206 08:51:30.222740   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:30.223109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:30.722586   48683 type.go:168] "Request Body" body=""
	I1206 08:51:30.722659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:30.723015   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:31.222877   48683 type.go:168] "Request Body" body=""
	I1206 08:51:31.222948   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:31.223216   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:31.223257   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:31.722581   48683 type.go:168] "Request Body" body=""
	I1206 08:51:31.722657   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:31.722986   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:32.222702   48683 type.go:168] "Request Body" body=""
	I1206 08:51:32.222778   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:32.223128   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:32.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:51:32.722632   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:32.722905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:33.222600   48683 type.go:168] "Request Body" body=""
	I1206 08:51:33.222731   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:33.223068   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:33.722763   48683 type.go:168] "Request Body" body=""
	I1206 08:51:33.722837   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:33.723186   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:33.723243   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:34.223469   48683 type.go:168] "Request Body" body=""
	I1206 08:51:34.223541   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:34.223815   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:34.722512   48683 type.go:168] "Request Body" body=""
	I1206 08:51:34.722584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:34.722905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:35.222597   48683 type.go:168] "Request Body" body=""
	I1206 08:51:35.222685   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:35.223031   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:35.722528   48683 type.go:168] "Request Body" body=""
	I1206 08:51:35.722600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:35.722870   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:36.223103   48683 type.go:168] "Request Body" body=""
	I1206 08:51:36.223184   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:36.223557   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:36.223614   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:36.723236   48683 type.go:168] "Request Body" body=""
	I1206 08:51:36.723314   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:36.723677   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:37.223456   48683 type.go:168] "Request Body" body=""
	I1206 08:51:37.223536   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:37.223814   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:37.722521   48683 type.go:168] "Request Body" body=""
	I1206 08:51:37.722595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:37.722941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:38.222667   48683 type.go:168] "Request Body" body=""
	I1206 08:51:38.222743   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:38.223128   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:38.722867   48683 type.go:168] "Request Body" body=""
	I1206 08:51:38.722943   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:38.723253   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:38.723310   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:39.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:51:39.222649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:39.223000   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:39.722686   48683 type.go:168] "Request Body" body=""
	I1206 08:51:39.722767   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:39.723127   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:40.222805   48683 type.go:168] "Request Body" body=""
	I1206 08:51:40.222893   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:40.223247   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:40.722586   48683 type.go:168] "Request Body" body=""
	I1206 08:51:40.722664   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:40.723068   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:41.223068   48683 type.go:168] "Request Body" body=""
	I1206 08:51:41.223147   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:41.223511   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:41.223567   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:41.723311   48683 type.go:168] "Request Body" body=""
	I1206 08:51:41.723402   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:41.723663   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:42.223489   48683 type.go:168] "Request Body" body=""
	I1206 08:51:42.223566   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:42.223933   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:42.722618   48683 type.go:168] "Request Body" body=""
	I1206 08:51:42.722694   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:42.723031   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:43.222740   48683 type.go:168] "Request Body" body=""
	I1206 08:51:43.222816   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:43.223098   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:43.722547   48683 type.go:168] "Request Body" body=""
	I1206 08:51:43.722622   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:43.722965   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:43.723044   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:44.222550   48683 type.go:168] "Request Body" body=""
	I1206 08:51:44.222647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:44.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:44.722528   48683 type.go:168] "Request Body" body=""
	I1206 08:51:44.722603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:44.722920   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:45.222681   48683 type.go:168] "Request Body" body=""
	I1206 08:51:45.222768   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:45.223254   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:45.723085   48683 type.go:168] "Request Body" body=""
	I1206 08:51:45.723156   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:45.723536   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:45.723592   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:46.223392   48683 type.go:168] "Request Body" body=""
	I1206 08:51:46.223456   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:46.223709   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:46.722472   48683 type.go:168] "Request Body" body=""
	I1206 08:51:46.722550   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:46.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:47.222580   48683 type.go:168] "Request Body" body=""
	I1206 08:51:47.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:47.223014   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:47.722500   48683 type.go:168] "Request Body" body=""
	I1206 08:51:47.722572   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:47.722920   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:48.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:48.222647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:48.222994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:48.223050   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:48.722729   48683 type.go:168] "Request Body" body=""
	I1206 08:51:48.722814   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:48.723224   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:49.222495   48683 type.go:168] "Request Body" body=""
	I1206 08:51:49.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:49.222841   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:49.722543   48683 type.go:168] "Request Body" body=""
	I1206 08:51:49.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:49.722989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:50.222560   48683 type.go:168] "Request Body" body=""
	I1206 08:51:50.222640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:50.222975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:50.722647   48683 type.go:168] "Request Body" body=""
	I1206 08:51:50.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:50.723039   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:50.723088   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:51.222890   48683 type.go:168] "Request Body" body=""
	I1206 08:51:51.222961   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:51.223302   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:51.723095   48683 type.go:168] "Request Body" body=""
	I1206 08:51:51.723166   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:51.723527   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:52.223293   48683 type.go:168] "Request Body" body=""
	I1206 08:51:52.223365   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:52.223638   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:52.723480   48683 type.go:168] "Request Body" body=""
	I1206 08:51:52.723556   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:52.723872   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:52.723957   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:53.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:53.222650   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:53.222971   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:53.722667   48683 type.go:168] "Request Body" body=""
	I1206 08:51:53.722737   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:53.723003   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:54.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:51:54.222637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:54.222983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:54.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:51:54.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:54.722987   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:55.223524   48683 type.go:168] "Request Body" body=""
	I1206 08:51:55.223593   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:55.223922   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:55.223979   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:55.722631   48683 type.go:168] "Request Body" body=""
	I1206 08:51:55.722706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:55.723040   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:56.223219   48683 type.go:168] "Request Body" body=""
	I1206 08:51:56.223289   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:56.223644   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:56.723321   48683 type.go:168] "Request Body" body=""
	I1206 08:51:56.723409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:56.723712   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:57.223501   48683 type.go:168] "Request Body" body=""
	I1206 08:51:57.223578   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:57.223899   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:57.722570   48683 type.go:168] "Request Body" body=""
	I1206 08:51:57.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:57.722944   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:57.722991   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:58.222513   48683 type.go:168] "Request Body" body=""
	I1206 08:51:58.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:58.222843   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:58.722524   48683 type.go:168] "Request Body" body=""
	I1206 08:51:58.722599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:58.722929   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:59.222533   48683 type.go:168] "Request Body" body=""
	I1206 08:51:59.222619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:59.222968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:59.722619   48683 type.go:168] "Request Body" body=""
	I1206 08:51:59.722692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:59.723017   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:59.723091   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 near-identical polling entries elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 request repeats every 500ms from 08:52:00 through 08:53:00, each response empty (status="" headers="" milliseconds=0), and node_ready.go:55 logs the same warning roughly every 2.5s: error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1206 08:53:01.223304   48683 type.go:168] "Request Body" body=""
	I1206 08:53:01.223397   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:01.223655   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:01.223703   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:01.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:53:01.723563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:01.723888   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:02.222575   48683 type.go:168] "Request Body" body=""
	I1206 08:53:02.222658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:02.223040   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:02.722720   48683 type.go:168] "Request Body" body=""
	I1206 08:53:02.722789   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:02.723094   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:03.222570   48683 type.go:168] "Request Body" body=""
	I1206 08:53:03.222643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:03.223006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:03.722718   48683 type.go:168] "Request Body" body=""
	I1206 08:53:03.722800   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:03.723133   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:03.723188   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:04.222478   48683 type.go:168] "Request Body" body=""
	I1206 08:53:04.222547   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:04.222820   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:04.722518   48683 type.go:168] "Request Body" body=""
	I1206 08:53:04.722592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:04.722965   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:05.222540   48683 type.go:168] "Request Body" body=""
	I1206 08:53:05.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:05.222941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:05.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:53:05.722596   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:05.722915   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:06.223065   48683 type.go:168] "Request Body" body=""
	I1206 08:53:06.223136   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:06.223522   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:06.223575   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:06.723193   48683 type.go:168] "Request Body" body=""
	I1206 08:53:06.723275   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:06.723670   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:07.223474   48683 type.go:168] "Request Body" body=""
	I1206 08:53:07.223549   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:07.223817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:07.722518   48683 type.go:168] "Request Body" body=""
	I1206 08:53:07.722603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:07.722954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:08.222651   48683 type.go:168] "Request Body" body=""
	I1206 08:53:08.222735   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:08.223112   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:08.722793   48683 type.go:168] "Request Body" body=""
	I1206 08:53:08.722864   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:08.723164   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:08.723216   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:09.222584   48683 type.go:168] "Request Body" body=""
	I1206 08:53:09.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:09.222992   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:09.722671   48683 type.go:168] "Request Body" body=""
	I1206 08:53:09.722749   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:09.723103   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:10.222760   48683 type.go:168] "Request Body" body=""
	I1206 08:53:10.222832   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:10.223102   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:10.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:53:10.722631   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:10.722988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:11.222770   48683 type.go:168] "Request Body" body=""
	I1206 08:53:11.222841   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:11.223177   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:11.223230   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:11.723479   48683 type.go:168] "Request Body" body=""
	I1206 08:53:11.723562   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:11.723836   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:12.222547   48683 type.go:168] "Request Body" body=""
	I1206 08:53:12.222623   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:12.222981   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:12.722692   48683 type.go:168] "Request Body" body=""
	I1206 08:53:12.722772   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:12.723109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:13.222517   48683 type.go:168] "Request Body" body=""
	I1206 08:53:13.222590   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:13.222851   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:13.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:53:13.722599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:13.722955   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:13.723015   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:14.222726   48683 type.go:168] "Request Body" body=""
	I1206 08:53:14.222802   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:14.223149   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:14.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:53:14.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:14.722912   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:15.222538   48683 type.go:168] "Request Body" body=""
	I1206 08:53:15.222617   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:15.222967   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:15.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:53:15.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:15.722981   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:16.222929   48683 type.go:168] "Request Body" body=""
	I1206 08:53:16.223005   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:16.223275   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:16.223314   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:16.723228   48683 type.go:168] "Request Body" body=""
	I1206 08:53:16.723311   48683 node_ready.go:38] duration metric: took 6m0.000967258s for node "functional-090986" to be "Ready" ...
	I1206 08:53:16.726672   48683 out.go:203] 
	W1206 08:53:16.729718   48683 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 08:53:16.729749   48683 out.go:285] * 
	W1206 08:53:16.732326   48683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:53:16.735459   48683 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-090986 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.230556783s for "functional-090986" cluster.
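The stderr trace above is the visible symptom of a simple control loop: the client re-checks the node's Ready condition every 500ms and gives up once the 6m0s wait deadline expires, which is what node_ready.go:38 and the GUEST_START error report. The sketch below is a minimal, self-contained illustration of that deadline-bounded poll pattern, not minikube's actual implementation; the check function is a stand-in for the GET /api/v1/nodes/functional-090986 call that kept returning "connection refused".

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitReady re-runs check every 500ms until it succeeds or ctx expires.
	func waitReady(ctx context.Context, check func() error) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if err := check(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		// minikube waited 6m0s here; shortened so the demo finishes quickly.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		err := waitReady(ctx, func() error {
			return errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
		})
		fmt.Println(err) // waiting for node to be ready: context deadline exceeded
	}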
I1206 08:53:17.276935    4292 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
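Every container port in the inspect output above is published only on 127.0.0.1 with an ephemeral host port, so tooling has to read the mapping back rather than assume fixed ports. As a hedged illustration (the Go template is the same one minikube runs later in this log via cli_runner; the rest is example scaffolding), the SSH port for this container can be recovered like so:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Extract the host port mapped to the container's 22/tcp; per the
		// NetworkSettings.Ports block above this prints 32788.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "functional-090986").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out)))
	}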
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (402.610166ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
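The harness tolerates this non-zero exit because minikube status encodes component state in its exit code rather than treating every unhealthy state as a hard failure, hence the "(may be ok)" note above. A minimal sketch of capturing both the printed host state and the code (command and flags copied from the run above; the interpretation comment is illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "functional-090986", "-n", "functional-090986")
		out, err := cmd.Output() // stdout is still populated on a non-zero exit
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // 2 in the run above: host Running, cluster unhealthy
		} else if err != nil {
			panic(err) // the binary itself could not be run
		}
		fmt.Printf("host=%s exit=%d (non-zero may be ok)\n",
			strings.TrimSpace(string(out)), code)
	}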
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-090986 logs -n 25: (1.073321038s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-181746 addons list                                                                                                                           │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ addons         │ functional-181746 addons list -o json                                                                                                                   │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service hello-node-connect --url                                                                                                      │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ start          │ -p functional-181746 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                   │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ service        │ functional-181746 service list                                                                                                                          │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-181746 --alsologtostderr -v=1                                                                                          │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service list -o json                                                                                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service --namespace=default --https --url hello-node                                                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service hello-node --url --format={{.IP}}                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service hello-node --url                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format short --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format yaml --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ ssh            │ functional-181746 ssh pgrep buildkitd                                                                                                                   │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ image          │ functional-181746 image build -t localhost/my-image:functional-181746 testdata/build --alsologtostderr                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format json --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format table --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls                                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ delete         │ -p functional-181746                                                                                                                                    │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ start          │ -p functional-090986 --alsologtostderr -v=8                                                                                                             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:47 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:47:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:47:11.094911   48683 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:47:11.095050   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095060   48683 out.go:374] Setting ErrFile to fd 2...
	I1206 08:47:11.095065   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095329   48683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:47:11.095763   48683 out.go:368] Setting JSON to false
	I1206 08:47:11.096588   48683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1782,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:47:11.096668   48683 start.go:143] virtualization:  
	I1206 08:47:11.100026   48683 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:47:11.103775   48683 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:47:11.103977   48683 notify.go:221] Checking for updates...
	I1206 08:47:11.109719   48683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:47:11.112668   48683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:11.115549   48683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:47:11.118516   48683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:47:11.121495   48683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:47:11.124961   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:11.125074   48683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:47:11.149854   48683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:47:11.149988   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.212959   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.203697623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.213084   48683 docker.go:319] overlay module found
	I1206 08:47:11.216243   48683 out.go:179] * Using the docker driver based on existing profile
	I1206 08:47:11.219285   48683 start.go:309] selected driver: docker
	I1206 08:47:11.219311   48683 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.219451   48683 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:47:11.219560   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.284944   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.27604915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.285369   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:11.285438   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:47:11.285486   48683 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.289257   48683 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:47:11.292082   48683 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:47:11.295206   48683 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:47:11.298095   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:11.298152   48683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:47:11.298166   48683 cache.go:65] Caching tarball of preloaded images
	I1206 08:47:11.298170   48683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:47:11.298253   48683 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:47:11.298264   48683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:47:11.298374   48683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:47:11.317301   48683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:47:11.317323   48683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:47:11.317345   48683 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:47:11.317377   48683 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:47:11.317445   48683 start.go:364] duration metric: took 50.847µs to acquireMachinesLock for "functional-090986"
	I1206 08:47:11.317466   48683 start.go:96] Skipping create...Using existing machine configuration
	I1206 08:47:11.317471   48683 fix.go:54] fixHost starting: 
	I1206 08:47:11.317772   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:11.334567   48683 fix.go:112] recreateIfNeeded on functional-090986: state=Running err=<nil>
	W1206 08:47:11.334595   48683 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 08:47:11.337684   48683 out.go:252] * Updating the running docker "functional-090986" container ...
	I1206 08:47:11.337717   48683 machine.go:94] provisionDockerMachine start ...
	I1206 08:47:11.337795   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.354534   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.354869   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.354883   48683 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:47:11.507058   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.507088   48683 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:47:11.507161   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.525196   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.525520   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.525537   48683 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:47:11.684471   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.684556   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.702187   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.702515   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.702540   48683 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:47:11.859622   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: 
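	(The hostname/hosts provisioning above can be spot-checked by hand from the CI host; a minimal sketch with hypothetical commands, not part of the test run:

		docker exec functional-090986 hostname                            # should print functional-090986
		docker exec functional-090986 grep functional-090986 /etc/hosts   # shows the 127.0.1.1 mapping added above
	)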
	I1206 08:47:11.859650   48683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:47:11.859671   48683 ubuntu.go:190] setting up certificates
	I1206 08:47:11.859680   48683 provision.go:84] configureAuth start
	I1206 08:47:11.859747   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:11.877706   48683 provision.go:143] copyHostCerts
	I1206 08:47:11.877750   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877787   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:47:11.877800   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877873   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:47:11.877976   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.877997   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:47:11.878007   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.878035   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:47:11.878088   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878108   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:47:11.878114   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878140   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:47:11.878192   48683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
	I1206 08:47:12.018564   48683 provision.go:177] copyRemoteCerts
	I1206 08:47:12.018632   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:47:12.018672   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.036577   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.143156   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 08:47:12.143226   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:47:12.160243   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 08:47:12.160303   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 08:47:12.177568   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 08:47:12.177628   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:47:12.194504   48683 provision.go:87] duration metric: took 334.802128ms to configureAuth
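	(The server certificate generated above embeds the SANs listed in the log: 127.0.0.1, 192.168.49.2, functional-090986, localhost, minikube. A sketch for inspecting them, assuming openssl is available on the CI host; not part of the test run:

		openssl x509 -in /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem \
		  -noout -text | grep -A1 'Subject Alternative Name'
	)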
	I1206 08:47:12.194543   48683 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:47:12.194717   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:12.194725   48683 machine.go:97] duration metric: took 857.000255ms to provisionDockerMachine
	I1206 08:47:12.194732   48683 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:47:12.194743   48683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:47:12.194796   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:47:12.194842   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.212073   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.315270   48683 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:47:12.318678   48683 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 08:47:12.318701   48683 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 08:47:12.318706   48683 command_runner.go:130] > VERSION_ID="12"
	I1206 08:47:12.318711   48683 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 08:47:12.318717   48683 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 08:47:12.318720   48683 command_runner.go:130] > ID=debian
	I1206 08:47:12.318724   48683 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 08:47:12.318730   48683 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 08:47:12.318735   48683 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 08:47:12.318975   48683 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:47:12.319002   48683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
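	(The key=value lines above come straight from /etc/os-release, which is plain shell syntax; a sketch for reading it directly inside the node, hypothetical and not part of the test run:

		docker exec functional-090986 sh -c '. /etc/os-release && echo "$PRETTY_NAME $VERSION_ID"'
	)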
	I1206 08:47:12.319013   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:47:12.319072   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:47:12.319161   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:47:12.319172   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /etc/ssl/certs/42922.pem
	I1206 08:47:12.319246   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:47:12.319253   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> /etc/test/nested/copy/4292/hosts
	I1206 08:47:12.319298   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:47:12.327031   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:12.344679   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:47:12.363077   48683 start.go:296] duration metric: took 168.329595ms for postStartSetup
	I1206 08:47:12.363152   48683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:47:12.363210   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.380353   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.487060   48683 command_runner.go:130] > 11%
	I1206 08:47:12.487699   48683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:47:12.493338   48683 command_runner.go:130] > 174G
	I1206 08:47:12.494716   48683 fix.go:56] duration metric: took 1.177238165s for fixHost
	I1206 08:47:12.494741   48683 start.go:83] releasing machines lock for "functional-090986", held for 1.177286419s
	I1206 08:47:12.494813   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:12.512960   48683 ssh_runner.go:195] Run: cat /version.json
	I1206 08:47:12.513022   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.513272   48683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:47:12.513331   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.541090   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.554766   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.647127   48683 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 08:47:12.647264   48683 ssh_runner.go:195] Run: systemctl --version
	I1206 08:47:12.750867   48683 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 08:47:12.751021   48683 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 08:47:12.751059   48683 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 08:47:12.751151   48683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 08:47:12.755609   48683 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 08:47:12.756103   48683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:47:12.756176   48683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:47:12.764393   48683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
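	(The find invocation above renames any bridge/podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube installs later; here nothing matched. A sketch of the reverse operation, should such configs ever need restoring by hand; hypothetical, not part of the test run:

		sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
		  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
	)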
	I1206 08:47:12.764420   48683 start.go:496] detecting cgroup driver to use...
	I1206 08:47:12.764452   48683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 08:47:12.764507   48683 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:47:12.779951   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:47:12.793243   48683 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:47:12.793324   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:47:12.809005   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:47:12.823043   48683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:47:12.939696   48683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:47:13.060632   48683 docker.go:234] disabling docker service ...
	I1206 08:47:13.060721   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:47:13.078332   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:47:13.093719   48683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:47:13.229319   48683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:47:13.368814   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:47:13.381432   48683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:47:13.395011   48683 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
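	(With /etc/crictl.yaml written as above, crictl reaches containerd without an explicit endpoint flag; the equivalent one-off form, as a sketch and not part of the test run:

		sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	)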
	I1206 08:47:13.396419   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:47:13.405770   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:47:13.415310   48683 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:47:13.415505   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:47:13.424963   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.433399   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:47:13.442072   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.450816   48683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:47:13.458824   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:47:13.467776   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:47:13.477145   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
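	(Taken together, the sed edits above leave /etc/containerd/config.toml with roughly the following CRI-plugin settings. This is a sketch only; the exact section paths vary with the containerd config version, and the seds are regex-based rather than section-aware:

		[plugins."io.containerd.grpc.v1.cri"]
		  enable_unprivileged_ports = true
		  sandbox_image = "registry.k8s.io/pause:3.10.1"
		  restrict_oom_score_adj = false
		  [plugins."io.containerd.grpc.v1.cri".cni]
		    conf_dir = "/etc/cni/net.d"
		  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
		    runtime_type = "io.containerd.runc.v2"
		    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		      SystemdCgroup = false
	)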
	I1206 08:47:13.486457   48683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:47:13.493910   48683 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 08:47:13.494986   48683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
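	(Both kernel knobs touched above can be confirmed in one call; a sketch, not part of the test run:

		sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
		# expected: both report 1
	)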
	I1206 08:47:13.503356   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:13.622996   48683 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 08:47:13.753042   48683 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:47:13.753133   48683 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:47:13.757647   48683 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 08:47:13.757672   48683 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 08:47:13.757681   48683 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1206 08:47:13.757689   48683 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:13.757724   48683 command_runner.go:130] > Access: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757736   48683 command_runner.go:130] > Modify: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757742   48683 command_runner.go:130] > Change: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757746   48683 command_runner.go:130] >  Birth: -
	I1206 08:47:13.757803   48683 start.go:564] Will wait 60s for crictl version
	I1206 08:47:13.757883   48683 ssh_runner.go:195] Run: which crictl
	I1206 08:47:13.761846   48683 command_runner.go:130] > /usr/local/bin/crictl
	I1206 08:47:13.761974   48683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:47:13.786269   48683 command_runner.go:130] > Version:  0.1.0
	I1206 08:47:13.786289   48683 command_runner.go:130] > RuntimeName:  containerd
	I1206 08:47:13.786295   48683 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 08:47:13.786302   48683 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 08:47:13.788604   48683 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 08:47:13.788708   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.809864   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 08:47:13.811926   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.831700   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 08:47:13.839817   48683 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 08:47:13.842721   48683 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
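	(The long Go template above collects name, driver, subnet, gateway, MTU and container IPs in one pass; a pared-down sketch querying just the subnet and gateway, hypothetical and not part of the test run:

		docker network inspect functional-090986 \
		  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	)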
	I1206 08:47:13.858999   48683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:47:13.862710   48683 command_runner.go:130] > 192.168.49.1	host.minikube.internal
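	(host.minikube.internal resolves to the Docker network gateway, 192.168.49.1 here, so workloads in the node can reach the host. A sketch for checking it from inside the node; hypothetical, not part of the test run:

		docker exec functional-090986 getent hosts host.minikube.internal
	)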
	I1206 08:47:13.862939   48683 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:47:13.863057   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:13.863132   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.889556   48683 command_runner.go:130] > {
	I1206 08:47:13.889580   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.889586   48683 command_runner.go:130] >     {
	I1206 08:47:13.889601   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.889607   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889612   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.889616   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889619   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889628   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.889635   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889640   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.889652   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889657   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889661   48683 command_runner.go:130] >     },
	I1206 08:47:13.889664   48683 command_runner.go:130] >     {
	I1206 08:47:13.889672   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.889676   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889681   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.889687   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889691   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889707   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.889710   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889715   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.889725   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889729   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889733   48683 command_runner.go:130] >     },
	I1206 08:47:13.889736   48683 command_runner.go:130] >     {
	I1206 08:47:13.889743   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.889752   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889767   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.889770   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889777   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889785   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.889792   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889796   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.889800   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.889808   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889815   48683 command_runner.go:130] >     },
	I1206 08:47:13.889818   48683 command_runner.go:130] >     {
	I1206 08:47:13.889825   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.889829   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889837   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.889841   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889844   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889852   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.889863   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889867   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.889871   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889875   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889885   48683 command_runner.go:130] >       },
	I1206 08:47:13.889889   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889892   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889896   48683 command_runner.go:130] >     },
	I1206 08:47:13.889899   48683 command_runner.go:130] >     {
	I1206 08:47:13.889906   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.889912   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889918   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.889920   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889925   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889933   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.889937   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889945   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.889949   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889960   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889964   48683 command_runner.go:130] >       },
	I1206 08:47:13.889970   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889975   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889987   48683 command_runner.go:130] >     },
	I1206 08:47:13.890022   48683 command_runner.go:130] >     {
	I1206 08:47:13.890033   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.890037   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890043   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.890049   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890054   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890064   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.890070   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890075   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.890078   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890082   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890087   48683 command_runner.go:130] >       },
	I1206 08:47:13.890092   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890098   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890102   48683 command_runner.go:130] >     },
	I1206 08:47:13.890105   48683 command_runner.go:130] >     {
	I1206 08:47:13.890112   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.890115   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890121   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.890124   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890139   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.890145   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890149   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.890153   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890156   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890159   48683 command_runner.go:130] >     },
	I1206 08:47:13.890170   48683 command_runner.go:130] >     {
	I1206 08:47:13.890177   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.890181   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890187   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.890190   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890206   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.890215   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890223   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.890228   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890231   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890235   48683 command_runner.go:130] >       },
	I1206 08:47:13.890239   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890250   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890254   48683 command_runner.go:130] >     },
	I1206 08:47:13.890257   48683 command_runner.go:130] >     {
	I1206 08:47:13.890264   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.890272   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890277   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.890280   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890284   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890291   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.890294   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890298   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.890305   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890310   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.890315   48683 command_runner.go:130] >       },
	I1206 08:47:13.890319   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890331   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.890335   48683 command_runner.go:130] >     }
	I1206 08:47:13.890337   48683 command_runner.go:130] >   ]
	I1206 08:47:13.890340   48683 command_runner.go:130] > }
	I1206 08:47:13.892630   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.892653   48683 containerd.go:534] Images already preloaded, skipping extraction
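	(The JSON above is the raw crictl payload the preload check parses; a sketch for listing just the image tags, assuming jq is installed on the node, not part of the test run:

		sudo crictl images --output json | jq -r '.images[].repoTags[]'
	)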
	I1206 08:47:13.892734   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.915064   48683 command_runner.go:130] > {
	I1206 08:47:13.915085   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.915091   48683 command_runner.go:130] >     {
	I1206 08:47:13.915102   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.915109   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915115   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.915119   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915142   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.915149   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915153   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.915157   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915161   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915164   48683 command_runner.go:130] >     },
	I1206 08:47:13.915167   48683 command_runner.go:130] >     {
	I1206 08:47:13.915178   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.915184   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915189   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.915193   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915208   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.915214   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915218   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.915222   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915225   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915228   48683 command_runner.go:130] >     },
	I1206 08:47:13.915231   48683 command_runner.go:130] >     {
	I1206 08:47:13.915238   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.915245   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915251   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.915254   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915262   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915270   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.915275   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915279   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.915286   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.915291   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915295   48683 command_runner.go:130] >     },
	I1206 08:47:13.915298   48683 command_runner.go:130] >     {
	I1206 08:47:13.915305   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.915311   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915320   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.915324   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915328   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915338   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.915341   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915345   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.915349   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915352   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915359   48683 command_runner.go:130] >       },
	I1206 08:47:13.915363   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915410   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915414   48683 command_runner.go:130] >     },
	I1206 08:47:13.915418   48683 command_runner.go:130] >     {
	I1206 08:47:13.915424   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.915428   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915434   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.915437   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915441   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915448   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.915451   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915455   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.915458   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915471   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915474   48683 command_runner.go:130] >       },
	I1206 08:47:13.915478   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915481   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915484   48683 command_runner.go:130] >     },
	I1206 08:47:13.915487   48683 command_runner.go:130] >     {
	I1206 08:47:13.915494   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.915497   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915503   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.915506   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915509   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915523   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.915526   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915530   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.915534   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915540   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915543   48683 command_runner.go:130] >       },
	I1206 08:47:13.915547   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915550   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915553   48683 command_runner.go:130] >     },
	I1206 08:47:13.915556   48683 command_runner.go:130] >     {
	I1206 08:47:13.915563   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.915580   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915585   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.915588   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915592   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915601   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.915608   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915612   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.915616   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915620   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915622   48683 command_runner.go:130] >     },
	I1206 08:47:13.915626   48683 command_runner.go:130] >     {
	I1206 08:47:13.915635   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.915649   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915655   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.915658   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915662   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915670   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.915676   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915680   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.915684   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915687   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915691   48683 command_runner.go:130] >       },
	I1206 08:47:13.915699   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915706   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915710   48683 command_runner.go:130] >     },
	I1206 08:47:13.915713   48683 command_runner.go:130] >     {
	I1206 08:47:13.915720   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.915723   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915728   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.915731   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915735   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915746   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.915752   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915756   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.915760   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915764   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.915777   48683 command_runner.go:130] >       },
	I1206 08:47:13.915781   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915785   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.915790   48683 command_runner.go:130] >     }
	I1206 08:47:13.915793   48683 command_runner.go:130] >   ]
	I1206 08:47:13.915796   48683 command_runner.go:130] > }
	I1206 08:47:13.917976   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.917998   48683 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:47:13.918006   48683 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:47:13.918108   48683 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
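	(The ExecStart override above is written as a systemd drop-in for the kubelet unit; a sketch for viewing the effective unit on the node, hypothetical and not part of the test run:

		docker exec functional-090986 systemctl cat kubelet
	)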
	I1206 08:47:13.918181   48683 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:47:13.946472   48683 command_runner.go:130] > {
	I1206 08:47:13.946489   48683 command_runner.go:130] >   "cniconfig": {
	I1206 08:47:13.946494   48683 command_runner.go:130] >     "Networks": [
	I1206 08:47:13.946497   48683 command_runner.go:130] >       {
	I1206 08:47:13.946502   48683 command_runner.go:130] >         "Config": {
	I1206 08:47:13.946507   48683 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 08:47:13.946512   48683 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 08:47:13.946516   48683 command_runner.go:130] >           "Plugins": [
	I1206 08:47:13.946520   48683 command_runner.go:130] >             {
	I1206 08:47:13.946524   48683 command_runner.go:130] >               "Network": {
	I1206 08:47:13.946529   48683 command_runner.go:130] >                 "ipam": {},
	I1206 08:47:13.946537   48683 command_runner.go:130] >                 "type": "loopback"
	I1206 08:47:13.946541   48683 command_runner.go:130] >               },
	I1206 08:47:13.946554   48683 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 08:47:13.946558   48683 command_runner.go:130] >             }
	I1206 08:47:13.946561   48683 command_runner.go:130] >           ],
	I1206 08:47:13.946573   48683 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 08:47:13.946581   48683 command_runner.go:130] >         },
	I1206 08:47:13.946586   48683 command_runner.go:130] >         "IFName": "lo"
	I1206 08:47:13.946590   48683 command_runner.go:130] >       }
	I1206 08:47:13.946593   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946597   48683 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 08:47:13.946601   48683 command_runner.go:130] >     "PluginDirs": [
	I1206 08:47:13.946605   48683 command_runner.go:130] >       "/opt/cni/bin"
	I1206 08:47:13.946609   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946613   48683 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 08:47:13.946617   48683 command_runner.go:130] >     "Prefix": "eth"
	I1206 08:47:13.946620   48683 command_runner.go:130] >   },
	I1206 08:47:13.946623   48683 command_runner.go:130] >   "config": {
	I1206 08:47:13.946627   48683 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 08:47:13.946630   48683 command_runner.go:130] >       "/etc/cdi",
	I1206 08:47:13.946636   48683 command_runner.go:130] >       "/var/run/cdi"
	I1206 08:47:13.946640   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946643   48683 command_runner.go:130] >     "cni": {
	I1206 08:47:13.946646   48683 command_runner.go:130] >       "binDir": "",
	I1206 08:47:13.946650   48683 command_runner.go:130] >       "binDirs": [
	I1206 08:47:13.946653   48683 command_runner.go:130] >         "/opt/cni/bin"
	I1206 08:47:13.946656   48683 command_runner.go:130] >       ],
	I1206 08:47:13.946661   48683 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 08:47:13.946665   48683 command_runner.go:130] >       "confTemplate": "",
	I1206 08:47:13.946668   48683 command_runner.go:130] >       "ipPref": "",
	I1206 08:47:13.946672   48683 command_runner.go:130] >       "maxConfNum": 1,
	I1206 08:47:13.946676   48683 command_runner.go:130] >       "setupSerially": false,
	I1206 08:47:13.946680   48683 command_runner.go:130] >       "useInternalLoopback": false
	I1206 08:47:13.946683   48683 command_runner.go:130] >     },
	I1206 08:47:13.946688   48683 command_runner.go:130] >     "containerd": {
	I1206 08:47:13.946696   48683 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 08:47:13.946701   48683 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 08:47:13.946706   48683 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 08:47:13.946710   48683 command_runner.go:130] >       "runtimes": {
	I1206 08:47:13.946713   48683 command_runner.go:130] >         "runc": {
	I1206 08:47:13.946718   48683 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 08:47:13.946722   48683 command_runner.go:130] >           "PodAnnotations": null,
	I1206 08:47:13.946728   48683 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 08:47:13.946733   48683 command_runner.go:130] >           "cgroupWritable": false,
	I1206 08:47:13.946738   48683 command_runner.go:130] >           "cniConfDir": "",
	I1206 08:47:13.946742   48683 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 08:47:13.946745   48683 command_runner.go:130] >           "io_type": "",
	I1206 08:47:13.946748   48683 command_runner.go:130] >           "options": {
	I1206 08:47:13.946752   48683 command_runner.go:130] >             "BinaryName": "",
	I1206 08:47:13.946756   48683 command_runner.go:130] >             "CriuImagePath": "",
	I1206 08:47:13.946761   48683 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 08:47:13.946764   48683 command_runner.go:130] >             "IoGid": 0,
	I1206 08:47:13.946768   48683 command_runner.go:130] >             "IoUid": 0,
	I1206 08:47:13.946772   48683 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 08:47:13.946776   48683 command_runner.go:130] >             "Root": "",
	I1206 08:47:13.946780   48683 command_runner.go:130] >             "ShimCgroup": "",
	I1206 08:47:13.946784   48683 command_runner.go:130] >             "SystemdCgroup": false
	I1206 08:47:13.946787   48683 command_runner.go:130] >           },
	I1206 08:47:13.946793   48683 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 08:47:13.946799   48683 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 08:47:13.946803   48683 command_runner.go:130] >           "runtimePath": "",
	I1206 08:47:13.946808   48683 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 08:47:13.946812   48683 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 08:47:13.946816   48683 command_runner.go:130] >           "snapshotter": ""
	I1206 08:47:13.946820   48683 command_runner.go:130] >         }
	I1206 08:47:13.946823   48683 command_runner.go:130] >       }
	I1206 08:47:13.946826   48683 command_runner.go:130] >     },
	I1206 08:47:13.946836   48683 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 08:47:13.946848   48683 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 08:47:13.946854   48683 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 08:47:13.946858   48683 command_runner.go:130] >     "disableApparmor": false,
	I1206 08:47:13.946863   48683 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 08:47:13.946867   48683 command_runner.go:130] >     "disableProcMount": false,
	I1206 08:47:13.946871   48683 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 08:47:13.946874   48683 command_runner.go:130] >     "enableCDI": true,
	I1206 08:47:13.946878   48683 command_runner.go:130] >     "enableSelinux": false,
	I1206 08:47:13.946883   48683 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 08:47:13.946887   48683 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 08:47:13.946891   48683 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 08:47:13.946896   48683 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 08:47:13.946900   48683 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 08:47:13.946905   48683 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 08:47:13.946909   48683 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 08:47:13.946917   48683 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946922   48683 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 08:47:13.946928   48683 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946932   48683 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 08:47:13.946937   48683 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 08:47:13.946940   48683 command_runner.go:130] >   },
	I1206 08:47:13.946943   48683 command_runner.go:130] >   "features": {
	I1206 08:47:13.946948   48683 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 08:47:13.946951   48683 command_runner.go:130] >   },
	I1206 08:47:13.946955   48683 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 08:47:13.946964   48683 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946974   48683 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946977   48683 command_runner.go:130] >   "runtimeHandlers": [
	I1206 08:47:13.946980   48683 command_runner.go:130] >     {
	I1206 08:47:13.946984   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.946988   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.946992   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.946996   48683 command_runner.go:130] >       }
	I1206 08:47:13.947002   48683 command_runner.go:130] >     },
	I1206 08:47:13.947006   48683 command_runner.go:130] >     {
	I1206 08:47:13.947009   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.947015   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.947019   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.947022   48683 command_runner.go:130] >       },
	I1206 08:47:13.947026   48683 command_runner.go:130] >       "name": "runc"
	I1206 08:47:13.947029   48683 command_runner.go:130] >     }
	I1206 08:47:13.947032   48683 command_runner.go:130] >   ],
	I1206 08:47:13.947035   48683 command_runner.go:130] >   "status": {
	I1206 08:47:13.947039   48683 command_runner.go:130] >     "conditions": [
	I1206 08:47:13.947042   48683 command_runner.go:130] >       {
	I1206 08:47:13.947046   48683 command_runner.go:130] >         "message": "",
	I1206 08:47:13.947050   48683 command_runner.go:130] >         "reason": "",
	I1206 08:47:13.947053   48683 command_runner.go:130] >         "status": true,
	I1206 08:47:13.947059   48683 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 08:47:13.947062   48683 command_runner.go:130] >       },
	I1206 08:47:13.947065   48683 command_runner.go:130] >       {
	I1206 08:47:13.947072   48683 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 08:47:13.947081   48683 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 08:47:13.947085   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947089   48683 command_runner.go:130] >         "type": "NetworkReady"
	I1206 08:47:13.947091   48683 command_runner.go:130] >       },
	I1206 08:47:13.947094   48683 command_runner.go:130] >       {
	I1206 08:47:13.947118   48683 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 08:47:13.947123   48683 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 08:47:13.947129   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947134   48683 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 08:47:13.947137   48683 command_runner.go:130] >       }
	I1206 08:47:13.947139   48683 command_runner.go:130] >     ]
	I1206 08:47:13.947142   48683 command_runner.go:130] >   }
	I1206 08:47:13.947144   48683 command_runner.go:130] > }
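	(The "status.conditions" array in the dump above is what minikube inspects to decide runtime readiness: RuntimeReady is true while NetworkReady stays false until a CNI config lands in /etc/cni/net.d. A minimal Go sketch of reading those conditions, assuming the `crictl info` JSON is piped to stdin; the struct mirrors only the fragment shown above and is illustrative, not minikube's own parser:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// criInfo mirrors just the status.conditions fragment of `crictl info`.
	type criInfo struct {
		Status struct {
			Conditions []struct {
				Type    string `json:"type"`
				Status  bool   `json:"status"`
				Reason  string `json:"reason"`
				Message string `json:"message"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		var info criInfo
		if err := json.NewDecoder(os.Stdin).Decode(&info); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, c := range info.Status.Conditions {
			// e.g. NetworkReady=false NetworkPluginNotReady before kindnet installs a CNI config
			fmt.Printf("%s=%v %s %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}

	Run as `crictl info | go run check.go` inside the node; it would print RuntimeReady=true and NetworkReady=false at this point in the log.)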
	I1206 08:47:13.947502   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:13.947519   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:47:13.947541   48683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:47:13.947564   48683 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:47:13.947673   48683 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
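	(The generated file above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal sketch of splitting and identifying such a stream in Go, assuming gopkg.in/yaml.v3 is available; this is illustrative, not minikube's own loader:

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	// typeMeta captures just enough of each document to identify it.
	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // yaml.v3 decoders iterate over "---" separated documents
		for {
			var tm typeMeta
			if err := dec.Decode(&tm); err != nil {
				break // io.EOF after the last document
			}
			fmt.Println(tm.APIVersion, tm.Kind)
		}
	}

	Against the config above this would print the four apiVersion/kind pairs in order.)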
	
	I1206 08:47:13.947742   48683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:47:13.955523   48683 command_runner.go:130] > kubeadm
	I1206 08:47:13.955542   48683 command_runner.go:130] > kubectl
	I1206 08:47:13.955546   48683 command_runner.go:130] > kubelet
	I1206 08:47:13.955560   48683 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:47:13.955622   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:47:13.963242   48683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:47:13.976514   48683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:47:13.994365   48683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 08:47:14.008131   48683 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:47:14.012074   48683 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
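	(The grep above only confirms that control-plane.minikube.internal already maps to 192.168.49.2 inside the node, so no /etc/hosts rewrite is needed. An equivalent check in Go, standard library only, as a hypothetical helper:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			// expect a line like: "192.168.49.2	control-plane.minikube.internal"
			if len(fields) >= 2 && fields[0] == "192.168.49.2" && fields[1] == "control-plane.minikube.internal" {
				fmt.Println("hosts entry present")
				return
			}
		}
		fmt.Println("hosts entry missing")
	})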
	I1206 08:47:14.012170   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:14.162349   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:14.970935   48683 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:47:14.971004   48683 certs.go:195] generating shared ca certs ...
	I1206 08:47:14.971035   48683 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:14.971212   48683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:47:14.971308   48683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:47:14.971340   48683 certs.go:257] generating profile certs ...
	I1206 08:47:14.971529   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:47:14.971755   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:47:14.971844   48683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:47:14.971869   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 08:47:14.971914   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 08:47:14.971945   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 08:47:14.971989   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 08:47:14.972021   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 08:47:14.972053   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 08:47:14.972085   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 08:47:14.972115   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 08:47:14.972198   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:47:14.972259   48683 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:47:14.972284   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:47:14.972342   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:47:14.972394   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:47:14.972452   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:47:14.972528   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:14.972579   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:14.972619   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem -> /usr/share/ca-certificates/4292.pem
	I1206 08:47:14.972659   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /usr/share/ca-certificates/42922.pem
	I1206 08:47:14.973224   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:47:14.995297   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:47:15.042161   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:47:15.062885   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:47:15.082018   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:47:15.101436   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:47:15.120061   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:47:15.140257   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:47:15.160107   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:47:15.178980   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:47:15.197893   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:47:15.216224   48683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:47:15.229330   48683 ssh_runner.go:195] Run: openssl version
	I1206 08:47:15.235331   48683 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 08:47:15.235817   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.243429   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:47:15.250764   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254643   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254673   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254723   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.295906   48683 command_runner.go:130] > b5213941
	I1206 08:47:15.295990   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:47:15.303441   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.310784   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:47:15.318504   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322051   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322380   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322461   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.363237   48683 command_runner.go:130] > 51391683
	I1206 08:47:15.363703   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:47:15.371299   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.378918   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:47:15.386367   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390281   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390354   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390410   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.431004   48683 command_runner.go:130] > 3ec20f2e
	I1206 08:47:15.431441   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
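	(The pattern above repeats per CA: copy the PEM into /usr/share/ca-certificates, compute its legacy subject hash with `openssl x509 -hash -noout`, then ensure /etc/ssl/certs/<hash>.0 is a symlink so OpenSSL's default verify path can find it. A sketch of the same check in Go, shelling out to openssl for the hash; the path is one of the CAs installed above, everything else is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0" // e.g. b5213941.0 above
		if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
			fmt.Println(link, "symlink present")
		} else {
			fmt.Println(link, "missing; a `sudo ln -fs` would be needed")
		}
	})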
	I1206 08:47:15.439072   48683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442819   48683 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442856   48683 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 08:47:15.442863   48683 command_runner.go:130] > Device: 259,1	Inode: 1055659     Links: 1
	I1206 08:47:15.442870   48683 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:15.442877   48683 command_runner.go:130] > Access: 2025-12-06 08:43:07.824678266 +0000
	I1206 08:47:15.442882   48683 command_runner.go:130] > Modify: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442890   48683 command_runner.go:130] > Change: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442895   48683 command_runner.go:130] >  Birth: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442956   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 08:47:15.483144   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.483601   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 08:47:15.524376   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.524527   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 08:47:15.567333   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.567897   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 08:47:15.609722   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.610195   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 08:47:15.652939   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.653458   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 08:47:15.694815   48683 command_runner.go:130] > Certificate will not expire
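	(Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 24 hours. The same test in pure Go with crypto/x509, as a sketch against one of the certs checked above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// openssl's -checkend 86400: does the cert outlive the next 24h?
		if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	})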
	I1206 08:47:15.695278   48683 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:15.695370   48683 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:47:15.695465   48683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:47:15.724990   48683 cri.go:89] found id: ""
	I1206 08:47:15.725064   48683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:47:15.732181   48683 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 08:47:15.732210   48683 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 08:47:15.732217   48683 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 08:47:15.733102   48683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 08:47:15.733116   48683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 08:47:15.733169   48683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 08:47:15.740768   48683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:47:15.741168   48683 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-090986" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.741273   48683 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "functional-090986" cluster setting kubeconfig missing "functional-090986" context setting]
	I1206 08:47:15.741558   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.741975   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.742128   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.742650   48683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 08:47:15.742669   48683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 08:47:15.742675   48683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 08:47:15.742680   48683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 08:47:15.742685   48683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 08:47:15.742976   48683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 08:47:15.743070   48683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 08:47:15.750828   48683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 08:47:15.750861   48683 kubeadm.go:602] duration metric: took 17.739612ms to restartPrimaryControlPlane
	I1206 08:47:15.750871   48683 kubeadm.go:403] duration metric: took 55.600148ms to StartCluster
	I1206 08:47:15.750890   48683 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.750966   48683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.751639   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.751842   48683 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 08:47:15.752180   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:15.752232   48683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 08:47:15.752302   48683 addons.go:70] Setting storage-provisioner=true in profile "functional-090986"
	I1206 08:47:15.752319   48683 addons.go:239] Setting addon storage-provisioner=true in "functional-090986"
	I1206 08:47:15.752322   48683 addons.go:70] Setting default-storageclass=true in profile "functional-090986"
	I1206 08:47:15.752340   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.752341   48683 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-090986"
	I1206 08:47:15.752637   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.752784   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.759188   48683 out.go:179] * Verifying Kubernetes components...
	I1206 08:47:15.762058   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:15.783651   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.783826   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.785192   48683 addons.go:239] Setting addon default-storageclass=true in "functional-090986"
	I1206 08:47:15.785238   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.785700   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.797451   48683 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 08:47:15.800625   48683 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:15.800648   48683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 08:47:15.800725   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.810048   48683 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:15.810080   48683 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 08:47:15.810147   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.824818   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:15.853374   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:15.963935   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:15.994167   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.016409   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:16.722308   48683 node_ready.go:35] waiting up to 6m0s for node "functional-090986" to be "Ready" ...
	I1206 08:47:16.722441   48683 type.go:168] "Request Body" body=""
	I1206 08:47:16.722509   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:16.722791   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.722902   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:16.722979   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722997   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723021   48683 retry.go:31] will retry after 246.599259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722932   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723088   48683 retry.go:31] will retry after 155.728524ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
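	(The retry.go:31 lines above and below show minikube re-running the kubectl apply with a randomized, growing delay while the apiserver is still coming up on :8441. A minimal backoff loop in the same spirit, as a sketch rather than minikube's actual retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func main() {
		delay := 150 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			cmd := exec.Command("kubectl", "apply", "--force", "-f",
				"/etc/kubernetes/addons/storage-provisioner.yaml")
			if err := cmd.Run(); err == nil {
				fmt.Println("applied on attempt", attempt)
				return
			}
			// jittered, roughly doubling delay, like the "will retry after ..." lines
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
		fmt.Println("giving up")
	})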
	I1206 08:47:16.879530   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.938491   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.942697   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.942739   48683 retry.go:31] will retry after 198.095926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.969843   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.032387   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.037081   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.037167   48683 retry.go:31] will retry after 340.655262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.141488   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:17.200483   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.200581   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.200607   48683 retry.go:31] will retry after 823.921965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.222635   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.222706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.222990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:17.378343   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.437909   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.437949   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.437997   48683 retry.go:31] will retry after 597.373907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.723431   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.723506   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.723862   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.025532   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:18.036222   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:18.102548   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.106195   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.106289   48683 retry.go:31] will retry after 988.595122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128444   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.128537   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128579   48683 retry.go:31] will retry after 1.22957213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.222734   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.222810   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.223190   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.722737   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.722827   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.723191   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:18.723277   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
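	(The interleaved GETs of /api/v1/nodes/functional-090986 are the node_ready.go poll: every request fails with connection refused until the apiserver binds :8441. A stripped-down version of that wait loop using plain net/http; the real client authenticates with the profile's client.crt/key and CA from the kapi.go config above, so InsecureSkipVerify here is a labeled simplification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Simplification: the logged client presents the profile's client cert instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-090986"
		deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s" above
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Println("apiserver answered:", resp.Status)
				return
			}
			fmt.Println("will retry:", err) // e.g. dial tcp 192.168.49.2:8441: connect: connection refused
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the apiserver")
	})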
	I1206 08:47:19.095767   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:19.151460   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.155168   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.155201   48683 retry.go:31] will retry after 1.717558752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.223503   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.223595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.223937   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:19.358372   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:19.411770   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.415269   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.415303   48683 retry.go:31] will retry after 781.287082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.722648   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.722942   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.197734   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:20.223123   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.223196   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.223547   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.262283   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.262363   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.262407   48683 retry.go:31] will retry after 1.829414459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.722870   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.722941   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.723284   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:20.723338   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:20.873661   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:20.932799   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.936985   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.937020   48683 retry.go:31] will retry after 2.554499586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:21.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.223553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.223934   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:21.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.722674   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.723048   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.092657   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:22.149785   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:22.153326   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.153368   48683 retry.go:31] will retry after 2.084938041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.222743   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.223181   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.722901   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.722987   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.723330   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:22.723402   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
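
Interleaved with the apply retries, the `node_ready.go:55` warnings come from a loop that polls `GET /api/v1/nodes/functional-090986` (every ~500ms here) until the node reports the `Ready` condition. A generic client-go version of that check — a sketch assuming a kubeconfig-based clientset, not minikube's `node_ready.go` itself:

```go
// Polling a node's Ready condition with client-go. While the apiserver is
// down, the Get returns the same "connect: connection refused" seen in the
// warnings above, and the caller retries.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %s has no Ready condition", name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(cs, "functional-090986")
	fmt.Println("Ready:", ready, "err:", err)
}
```
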
	I1206 08:47:23.223196   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.223285   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.223660   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:23.492173   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:23.557652   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:23.557715   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.557741   48683 retry.go:31] will retry after 4.19827742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.723091   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.723166   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.723482   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.223263   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.223339   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.223623   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.238906   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:24.307275   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:24.307320   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.307339   48683 retry.go:31] will retry after 4.494270685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.722793   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.722877   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.723244   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:25.222930   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.223006   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.223365   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
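
The paired `round_trippers.go` "Request"/"Response" entries are emitted by client-go's debug round tripper, which wraps the HTTP transport and logs each exchange. The wrapping technique looks roughly like this — `loggingTransport` is an invented name for illustration, not client-go's type:

```go
// Wrapping http.RoundTripper to log each request/response pair, in the
// style of the round_trippers.go lines above. When the dial is refused,
// resp is nil, so the logged status comes out empty — matching the
// status="" entries in this log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL.String())
	resp, err := t.next.RoundTrip(req)
	status := ""
	if resp != nil {
		status = resp.Status
	}
	fmt.Printf("Response status=%q milliseconds=%d err=%v\n",
		status, time.Since(start).Milliseconds(), err)
	return resp, err
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-090986")
	fmt.Println("probe result:", err)
}
```
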
	W1206 08:47:25.223455   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:25.723213   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.723279   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.723596   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.223491   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.223588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.223913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.722621   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.722699   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.723036   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.222525   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.222628   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.222892   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.722651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.722982   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:27.723035   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:27.756528   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:27.814954   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:27.818792   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:27.818824   48683 retry.go:31] will retry after 5.399057422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:28.223412   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.223490   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.223811   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.723414   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.723485   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.723794   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.802108   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:28.864913   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:28.864953   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:28.864972   48683 retry.go:31] will retry after 3.285056528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:29.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.223556   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.223857   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:29.722601   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.723030   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:29.723087   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:30.222650   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.222720   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.223035   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:30.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.222982   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.223061   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.223424   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.723202   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.723273   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.723614   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:31.723661   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
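
Note that every "Response" line in this stretch reports `status=""` and `milliseconds=0`: the TCP dial to 192.168.49.2:8441 is refused, so no HTTP exchange ever happens. In Go such a failure unwraps to `syscall.ECONNREFUSED`, which is one way a caller can tell "apiserver not up yet" apart from an HTTP-level error — a minimal sketch, assuming a Unix platform:

```go
// Distinguishing a refused connection from an HTTP error. The error chain
// from http.Get (url.Error -> net.OpError -> os.SyscallError) unwraps to
// syscall.ECONNREFUSED on Unix, matching the "connect: connection refused"
// messages in this log.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
)

func main() {
	_, err := http.Get("https://192.168.49.2:8441/api/v1/nodes/functional-090986")
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("apiserver not accepting connections yet:", err)
		return
	}
	fmt.Println("some other outcome:", err)
}
```
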
	I1206 08:47:32.150291   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:32.207920   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:32.211781   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.211813   48683 retry.go:31] will retry after 10.805243336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.223065   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.223158   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.223541   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:32.723329   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.723438   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.723744   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.218182   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:33.222610   48683 type.go:168] "Request Body" body=""
	I1206 08:47:33.222677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:33.222931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.295753   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:33.295946   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:33.295967   48683 retry.go:31] will retry after 9.227502372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:33.723484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:33.723575   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:33.723917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:33.723973   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:34.222605   48683 type.go:168] "Request Body" body=""
	I1206 08:47:34.222681   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:34.223037   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:34.723424   48683 type.go:168] "Request Body" body=""
	I1206 08:47:34.723499   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:34.723811   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:35.222543   48683 type.go:168] "Request Body" body=""
	I1206 08:47:35.222621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:35.222963   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:35.722601   48683 type.go:168] "Request Body" body=""
	I1206 08:47:35.722678   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:35.723029   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:36.223123   48683 type.go:168] "Request Body" body=""
	I1206 08:47:36.223195   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:36.223476   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:36.223516   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:36.723305   48683 type.go:168] "Request Body" body=""
	I1206 08:47:36.723388   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:36.723674   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:37.223484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:37.223557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:37.223866   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:37.723315   48683 type.go:168] "Request Body" body=""
	I1206 08:47:37.723395   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:37.723693   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:38.223481   48683 type.go:168] "Request Body" body=""
	I1206 08:47:38.223553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:38.223887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:38.223937   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:38.722588   48683 type.go:168] "Request Body" body=""
	I1206 08:47:38.722659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:38.723024   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:39.223350   48683 type.go:168] "Request Body" body=""
	I1206 08:47:39.223435   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:39.223711   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:39.723507   48683 type.go:168] "Request Body" body=""
	I1206 08:47:39.723587   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:39.723926   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:40.222518   48683 type.go:168] "Request Body" body=""
	I1206 08:47:40.222602   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:40.223000   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:40.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:47:40.723573   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:40.723901   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:40.723952   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:41.222532   48683 type.go:168] "Request Body" body=""
	I1206 08:47:41.222606   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:41.222910   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:41.722688   48683 type.go:168] "Request Body" body=""
	I1206 08:47:41.722766   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:41.723083   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:42.222810   48683 type.go:168] "Request Body" body=""
	I1206 08:47:42.222891   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:42.223201   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:42.523700   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:42.586651   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:42.586695   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:42.586713   48683 retry.go:31] will retry after 12.2898811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:42.723024   48683 type.go:168] "Request Body" body=""
	I1206 08:47:42.723100   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:42.723445   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:43.017838   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:43.079371   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:43.079435   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:43.079458   48683 retry.go:31] will retry after 19.494910144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:43.222603   48683 type.go:168] "Request Body" body=""
	I1206 08:47:43.222692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:43.223135   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:43.223199   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:43.722619   48683 type.go:168] "Request Body" body=""
	I1206 08:47:43.722697   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:43.722959   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:44.222540   48683 type.go:168] "Request Body" body=""
	I1206 08:47:44.222614   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:44.222964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:44.722637   48683 type.go:168] "Request Body" body=""
	I1206 08:47:44.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:44.723067   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:45.222713   48683 type.go:168] "Request Body" body=""
	I1206 08:47:45.222784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:45.223156   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:45.223228   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:45.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:47:45.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:45.722969   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:46.223003   48683 type.go:168] "Request Body" body=""
	I1206 08:47:46.223089   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:46.223469   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:46.723273   48683 type.go:168] "Request Body" body=""
	I1206 08:47:46.723345   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:46.723681   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:47.223099   48683 type.go:168] "Request Body" body=""
	I1206 08:47:47.223167   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:47.223496   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:47.223542   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:47.723308   48683 type.go:168] "Request Body" body=""
	I1206 08:47:47.723392   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:47.723713   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:48.223454   48683 type.go:168] "Request Body" body=""
	I1206 08:47:48.223519   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:48.223802   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:48.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:47:48.722647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:48.722998   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:49.222713   48683 type.go:168] "Request Body" body=""
	I1206 08:47:49.222788   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:49.223109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:49.722484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:49.722561   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:49.722823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:49.722870   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:50.222580   48683 type.go:168] "Request Body" body=""
	I1206 08:47:50.222659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:50.222990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:50.722702   48683 type.go:168] "Request Body" body=""
	I1206 08:47:50.722785   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:50.723086   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:51.222858   48683 type.go:168] "Request Body" body=""
	I1206 08:47:51.222936   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:51.223324   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:51.723237   48683 type.go:168] "Request Body" body=""
	I1206 08:47:51.723311   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:51.723634   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:51.723682   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:52.223453   48683 type.go:168] "Request Body" body=""
	I1206 08:47:52.223522   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:52.223869   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:52.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:47:52.722642   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:52.722897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:53.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:47:53.222638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:53.222985   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:53.722688   48683 type.go:168] "Request Body" body=""
	I1206 08:47:53.722770   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:53.723108   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:54.222503   48683 type.go:168] "Request Body" body=""
	I1206 08:47:54.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:54.222905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:54.222955   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:54.722588   48683 type.go:168] "Request Body" body=""
	I1206 08:47:54.722660   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:54.723065   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:54.877464   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:54.933804   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:54.937955   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:54.937987   48683 retry.go:31] will retry after 17.91075527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
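
Each stderr block above fails at kubectl's client-side validation step: before applying, kubectl downloads the server's OpenAPI schema (`/openapi/v2`) via the kubeconfig endpoint (`localhost:8441`), and that download is what hits the refused connection first. The suggested `--validate=false` would only move the failure to the apply itself while the apiserver is down. Replaying the logged command with validation off, sketched via os/exec — the binary and manifest paths are copied from the log, `--validate=false` is a standard kubectl flag, and `sudo VAR=value cmd` is the same invocation form minikube uses:

```go
// Re-running the logged apply with client-side schema validation disabled.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		// With the apiserver still down this fails anyway, just at the
		// apply step instead of at schema download.
		fmt.Println("apply failed:", err)
	}
}
```
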
	I1206 08:47:55.223442   48683 type.go:168] "Request Body" body=""
	I1206 08:47:55.223519   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:55.223852   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:56.223487   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET repeated every ~500ms through 08:48:02.2; every attempt was refused, with the same node_ready.go:55 warning logged every few attempts ...]
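In parallel with the addon retries, the node_ready check re-issues the same GET against the node object roughly every 500ms, treating connection refused as retryable. Note that both endpoints are refused — kubectl's https://localhost:8441 from inside the node and this poller's https://192.168.49.2:8441 — so the apiserver is not listening at all, rather than being unreachable over one route. A stripped-down stand-in for the poll loop (the real client authenticates with cluster certificates and inspects the node body; this sketch only retries the HTTP round trip, and skips TLS verification purely to stay self-contained):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForAPIServer keeps issuing the same GET every 500ms until the
	// server answers or the deadline passes, mirroring the loop above.
	func waitForAPIServer(url string, timeout time.Duration) error {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Println("apiserver answered:", resp.Status)
				return nil
			}
			fmt.Println("will retry:", err) // e.g. connect: connection refused
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("endpoint unreachable after %s", timeout)
	}
	
	func main() {
		_ = waitForAPIServer("https://192.168.49.2:8441/api/v1/nodes/functional-090986", 5*time.Second)
	}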
	I1206 08:48:02.575367   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:02.637904   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:02.637958   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:02.637977   48683 retry.go:31] will retry after 12.943468008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
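The stderr names its own escape hatch: --validate=false skips the failing openapi download, though the apply still needs a reachable apiserver to do anything. For reference, the same command with validation disabled — binary and manifest paths copied from the log; running this by hand is not part of the test flow:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// Same apply as in the log, with client-side validation turned off.
		cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storageclass.yaml")
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("apply failed: %v\n%s", err, out)
		}
	}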
	I1206 08:48:02.723120   48683 type.go:168] "Request Body" body=""
	I1206 08:48:02.723231   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:02.723512   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:02.723552   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET repeated every ~500ms through 08:48:12.7; every attempt was refused, with the same node_ready.go:55 warning logged every few attempts ...]
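Once the apiserver does answer, the Ready check reduces to reading the node's status.conditions for the entry with type "Ready". A minimal decoder over that core/v1 shape, with the struct trimmed to just the fields the check needs:

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// node is a trimmed view of a core/v1 Node: only status.conditions matters here.
	type node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	
	// isReady reports whether the Ready condition is present and "True".
	func isReady(raw []byte) (bool, error) {
		var n node
		if err := json.Unmarshal(raw, &n); err != nil {
			return false, err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True", nil
			}
		}
		return false, nil // no Ready condition reported yet
	}
	
	func main() {
		ready, _ := isReady([]byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`))
		fmt.Println("ready:", ready)
	}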
	I1206 08:48:12.849275   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:12.904952   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:12.908634   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:12.908667   48683 retry.go:31] will retry after 25.236445918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:13.223053   48683 type.go:168] "Request Body" body=""
	I1206 08:48:13.223119   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:13.223405   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:14.223925   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET repeated every ~500ms through 08:48:15.2; every attempt was refused ...]
	I1206 08:48:15.582577   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:15.646326   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:15.649856   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:15.649887   48683 retry.go:31] will retry after 20.09954841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:15.723221   48683 type.go:168] "Request Body" body=""
	I1206 08:48:15.723293   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:15.723656   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:16.723996   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET repeated every ~500ms through 08:48:35.7; every attempt was refused, with the same node_ready.go:55 warning logged every few attempts ...]
	I1206 08:48:35.750369   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:35.818338   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818385   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818494   48683 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 08:48:36.223177   48683 type.go:168] "Request Body" body=""
	I1206 08:48:36.223245   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:36.223588   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:37.223851   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET repeated every ~500ms through 08:48:37.7; every attempt was refused ...]
	I1206 08:48:38.145414   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:38.206093   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210075   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210171   48683 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 08:48:38.213345   48683 out.go:179] * Enabled addons: 
	I1206 08:48:38.217127   48683 addons.go:530] duration metric: took 1m22.464883403s for enable addons: enabled=[]
	I1206 08:48:38.223238   48683 type.go:168] "Request Body" body=""
	I1206 08:48:38.223319   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:38.223680   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:39.723066   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET repeated every ~500ms through 08:48:47.7; every attempt was refused, with the same node_ready.go:55 warning logged every few attempts ...]
	I1206 08:48:48.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:48:48.222616   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:48.222974   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:48.722711   48683 type.go:168] "Request Body" body=""
	I1206 08:48:48.722784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:48.723121   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:48.723172   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:49.222551   48683 type.go:168] "Request Body" body=""
	I1206 08:48:49.222621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:49.222915   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:49.722650   48683 type.go:168] "Request Body" body=""
	I1206 08:48:49.722727   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:49.723082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:50.222645   48683 type.go:168] "Request Body" body=""
	I1206 08:48:50.222761   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:50.223073   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:50.722501   48683 type.go:168] "Request Body" body=""
	I1206 08:48:50.722569   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:50.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:51.222952   48683 type.go:168] "Request Body" body=""
	I1206 08:48:51.223025   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:51.223425   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:51.223480   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:51.723105   48683 type.go:168] "Request Body" body=""
	I1206 08:48:51.723185   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:51.723538   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:52.223323   48683 type.go:168] "Request Body" body=""
	I1206 08:48:52.223409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:52.223689   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:52.722451   48683 type.go:168] "Request Body" body=""
	I1206 08:48:52.722525   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:52.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:53.222606   48683 type.go:168] "Request Body" body=""
	I1206 08:48:53.222684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:53.223017   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:53.722735   48683 type.go:168] "Request Body" body=""
	I1206 08:48:53.722801   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:53.723122   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:53.723177   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:54.222846   48683 type.go:168] "Request Body" body=""
	I1206 08:48:54.222924   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:54.223260   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:54.722973   48683 type.go:168] "Request Body" body=""
	I1206 08:48:54.723056   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:54.723447   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:55.223281   48683 type.go:168] "Request Body" body=""
	I1206 08:48:55.223354   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:55.223701   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:55.723485   48683 type.go:168] "Request Body" body=""
	I1206 08:48:55.723577   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:55.723911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:55.723962   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:56.222980   48683 type.go:168] "Request Body" body=""
	I1206 08:48:56.223059   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:56.223408   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:56.723182   48683 type.go:168] "Request Body" body=""
	I1206 08:48:56.723251   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:56.723637   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:57.223421   48683 type.go:168] "Request Body" body=""
	I1206 08:48:57.223498   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:57.223873   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:57.722566   48683 type.go:168] "Request Body" body=""
	I1206 08:48:57.722642   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:57.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:58.222529   48683 type.go:168] "Request Body" body=""
	I1206 08:48:58.222603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:58.222866   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:58.222905   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:58.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:48:58.722681   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:58.723002   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:59.222616   48683 type.go:168] "Request Body" body=""
	I1206 08:48:59.222687   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:59.223028   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:59.722572   48683 type.go:168] "Request Body" body=""
	I1206 08:48:59.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:59.722925   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:00.222639   48683 type.go:168] "Request Body" body=""
	I1206 08:49:00.222712   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:00.223014   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:00.223060   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:00.722635   48683 type.go:168] "Request Body" body=""
	I1206 08:49:00.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:00.723063   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:01.223028   48683 type.go:168] "Request Body" body=""
	I1206 08:49:01.223234   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:01.223616   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:01.723323   48683 type.go:168] "Request Body" body=""
	I1206 08:49:01.723423   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:01.723798   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:02.223472   48683 type.go:168] "Request Body" body=""
	I1206 08:49:02.223571   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:02.223936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:02.223997   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:02.722537   48683 type.go:168] "Request Body" body=""
	I1206 08:49:02.722619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:02.722919   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:03.222564   48683 type.go:168] "Request Body" body=""
	I1206 08:49:03.222635   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:03.222942   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:03.722533   48683 type.go:168] "Request Body" body=""
	I1206 08:49:03.722640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:03.722941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:04.222483   48683 type.go:168] "Request Body" body=""
	I1206 08:49:04.222572   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:04.222897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:04.722446   48683 type.go:168] "Request Body" body=""
	I1206 08:49:04.722517   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:04.722832   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:04.722879   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:05.222585   48683 type.go:168] "Request Body" body=""
	I1206 08:49:05.222673   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:05.222992   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:05.723297   48683 type.go:168] "Request Body" body=""
	I1206 08:49:05.723409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:05.723669   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:06.223469   48683 type.go:168] "Request Body" body=""
	I1206 08:49:06.223552   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:06.223906   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:06.722512   48683 type.go:168] "Request Body" body=""
	I1206 08:49:06.722590   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:06.722911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:06.722967   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:07.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:49:07.222610   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:07.222868   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:07.722572   48683 type.go:168] "Request Body" body=""
	I1206 08:49:07.722677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:07.723006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:08.222578   48683 type.go:168] "Request Body" body=""
	I1206 08:49:08.222672   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:08.222979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:08.722492   48683 type.go:168] "Request Body" body=""
	I1206 08:49:08.722560   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:08.722911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:09.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:49:09.222652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:09.222979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:09.223046   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:09.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:49:09.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:09.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:10.222533   48683 type.go:168] "Request Body" body=""
	I1206 08:49:10.222600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:10.222896   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:10.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:49:10.722654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:10.722954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:11.222977   48683 type.go:168] "Request Body" body=""
	I1206 08:49:11.223048   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:11.224357   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1206 08:49:11.224412   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:11.722526   48683 type.go:168] "Request Body" body=""
	I1206 08:49:11.722595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:11.722867   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:12.222588   48683 type.go:168] "Request Body" body=""
	I1206 08:49:12.222693   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:12.223079   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:12.722676   48683 type.go:168] "Request Body" body=""
	I1206 08:49:12.722753   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:12.723090   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:13.222539   48683 type.go:168] "Request Body" body=""
	I1206 08:49:13.222608   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:13.222924   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:13.722639   48683 type.go:168] "Request Body" body=""
	I1206 08:49:13.722719   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:13.723062   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:13.723117   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:14.222784   48683 type.go:168] "Request Body" body=""
	I1206 08:49:14.222858   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:14.223204   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:14.722507   48683 type.go:168] "Request Body" body=""
	I1206 08:49:14.722588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:14.722847   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:15.222870   48683 type.go:168] "Request Body" body=""
	I1206 08:49:15.222963   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:15.223324   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:15.722751   48683 type.go:168] "Request Body" body=""
	I1206 08:49:15.722830   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:15.723164   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:15.723220   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:16.223389   48683 type.go:168] "Request Body" body=""
	I1206 08:49:16.223501   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:16.223841   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:16.723482   48683 type.go:168] "Request Body" body=""
	I1206 08:49:16.723553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:16.723936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:17.222504   48683 type.go:168] "Request Body" body=""
	I1206 08:49:17.222580   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:17.222930   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:17.722456   48683 type.go:168] "Request Body" body=""
	I1206 08:49:17.722525   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:17.722830   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:18.222500   48683 type.go:168] "Request Body" body=""
	I1206 08:49:18.222575   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:18.222913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:18.222970   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:18.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:49:18.722612   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:18.722957   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:19.223415   48683 type.go:168] "Request Body" body=""
	I1206 08:49:19.223481   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:19.223744   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:19.723518   48683 type.go:168] "Request Body" body=""
	I1206 08:49:19.723592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:19.723932   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:20.222529   48683 type.go:168] "Request Body" body=""
	I1206 08:49:20.222604   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:20.222980   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:20.223052   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:20.723464   48683 type.go:168] "Request Body" body=""
	I1206 08:49:20.723534   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:20.723877   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:21.222834   48683 type.go:168] "Request Body" body=""
	I1206 08:49:21.222916   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:21.223278   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:21.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:49:21.722665   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:21.723037   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:22.222535   48683 type.go:168] "Request Body" body=""
	I1206 08:49:22.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:22.223029   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:22.223081   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:22.722589   48683 type.go:168] "Request Body" body=""
	I1206 08:49:22.722661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:22.723051   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:23.222635   48683 type.go:168] "Request Body" body=""
	I1206 08:49:23.222710   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:23.223010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:23.722511   48683 type.go:168] "Request Body" body=""
	I1206 08:49:23.722583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:23.722907   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:24.222593   48683 type.go:168] "Request Body" body=""
	I1206 08:49:24.222679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:24.223059   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:24.223115   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:24.722807   48683 type.go:168] "Request Body" body=""
	I1206 08:49:24.722887   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:24.723288   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:25.223044   48683 type.go:168] "Request Body" body=""
	I1206 08:49:25.223114   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:25.223419   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:25.723206   48683 type.go:168] "Request Body" body=""
	I1206 08:49:25.723280   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:25.723645   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:26.222468   48683 type.go:168] "Request Body" body=""
	I1206 08:49:26.222541   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:26.222888   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:26.722538   48683 type.go:168] "Request Body" body=""
	I1206 08:49:26.722616   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:26.722868   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:26.722924   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:27.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:49:27.222618   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:27.222966   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:27.722668   48683 type.go:168] "Request Body" body=""
	I1206 08:49:27.722745   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:27.723116   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:28.222806   48683 type.go:168] "Request Body" body=""
	I1206 08:49:28.222880   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:28.223155   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:28.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:49:28.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:28.723088   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:28.723155   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:29.222670   48683 type.go:168] "Request Body" body=""
	I1206 08:49:29.222755   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:29.223135   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:29.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:49:29.722634   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:29.722895   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:30.222571   48683 type.go:168] "Request Body" body=""
	I1206 08:49:30.222645   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:30.222996   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:30.722682   48683 type.go:168] "Request Body" body=""
	I1206 08:49:30.722768   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:30.723166   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:30.723221   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:31.223009   48683 type.go:168] "Request Body" body=""
	I1206 08:49:31.223094   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:31.223410   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:31.723177   48683 type.go:168] "Request Body" body=""
	I1206 08:49:31.723280   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:31.723629   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:32.223466   48683 type.go:168] "Request Body" body=""
	I1206 08:49:32.223541   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:32.223936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:32.722617   48683 type.go:168] "Request Body" body=""
	I1206 08:49:32.722684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:32.722984   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:33.222572   48683 type.go:168] "Request Body" body=""
	I1206 08:49:33.222647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:33.222977   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:33.223031   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:33.722723   48683 type.go:168] "Request Body" body=""
	I1206 08:49:33.722796   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:33.723147   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:34.222719   48683 type.go:168] "Request Body" body=""
	I1206 08:49:34.222791   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:34.223074   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:34.722746   48683 type.go:168] "Request Body" body=""
	I1206 08:49:34.722818   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:34.723175   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:35.222890   48683 type.go:168] "Request Body" body=""
	I1206 08:49:35.222977   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:35.223336   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:35.223421   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:35.723153   48683 type.go:168] "Request Body" body=""
	I1206 08:49:35.723223   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:35.723599   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:36.223510   48683 type.go:168] "Request Body" body=""
	I1206 08:49:36.223602   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:36.223964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:36.722569   48683 type.go:168] "Request Body" body=""
	I1206 08:49:36.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:36.723010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:37.222512   48683 type.go:168] "Request Body" body=""
	I1206 08:49:37.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:37.222842   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:37.722572   48683 type.go:168] "Request Body" body=""
	I1206 08:49:37.722645   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:37.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:37.723047   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:38.222686   48683 type.go:168] "Request Body" body=""
	I1206 08:49:38.222765   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:38.223119   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:38.722610   48683 type.go:168] "Request Body" body=""
	I1206 08:49:38.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:38.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:39.222653   48683 type.go:168] "Request Body" body=""
	I1206 08:49:39.222728   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:39.223084   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:39.722817   48683 type.go:168] "Request Body" body=""
	I1206 08:49:39.722896   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:39.723225   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:39.723274   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:40.222564   48683 type.go:168] "Request Body" body=""
	I1206 08:49:40.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:40.223023   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:40.722742   48683 type.go:168] "Request Body" body=""
	I1206 08:49:40.722820   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:40.723169   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:41.222970   48683 type.go:168] "Request Body" body=""
	I1206 08:49:41.223060   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:41.223424   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:41.723194   48683 type.go:168] "Request Body" body=""
	I1206 08:49:41.723270   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:41.723557   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:41.723610   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:42.223426   48683 type.go:168] "Request Body" body=""
	I1206 08:49:42.223508   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:42.223855   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:42.722579   48683 type.go:168] "Request Body" body=""
	I1206 08:49:42.722655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:42.723008   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:43.222519   48683 type.go:168] "Request Body" body=""
	I1206 08:49:43.222591   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:43.222864   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:43.722547   48683 type.go:168] "Request Body" body=""
	I1206 08:49:43.722618   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:43.722917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:44.222612   48683 type.go:168] "Request Body" body=""
	I1206 08:49:44.222685   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:44.223025   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:44.223081   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:44.722483   48683 type.go:168] "Request Body" body=""
	I1206 08:49:44.722565   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:44.722832   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:45.222605   48683 type.go:168] "Request Body" body=""
	I1206 08:49:45.222714   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:45.223204   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:45.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:49:45.722651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:45.722964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:46.223140   48683 type.go:168] "Request Body" body=""
	I1206 08:49:46.223214   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:46.223549   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:46.223591   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:46.723311   48683 type.go:168] "Request Body" body=""
	I1206 08:49:46.723406   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:46.723743   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:47.222461   48683 type.go:168] "Request Body" body=""
	I1206 08:49:47.222537   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:47.222889   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:47.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:49:47.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:47.722902   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:48.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:49:48.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:48.223027   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:48.722771   48683 type.go:168] "Request Body" body=""
	I1206 08:49:48.722869   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:48.723227   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:48.723289   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:49.222922   48683 type.go:168] "Request Body" body=""
	I1206 08:49:49.222993   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:49.223256   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:49.723128   48683 type.go:168] "Request Body" body=""
	I1206 08:49:49.723204   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:49.723574   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:50.223420   48683 type.go:168] "Request Body" body=""
	I1206 08:49:50.223491   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:50.223824   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:50.722515   48683 type.go:168] "Request Body" body=""
	I1206 08:49:50.722583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:50.722856   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:51.223538   48683 type.go:168] "Request Body" body=""
	I1206 08:49:51.223610   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:51.223931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:51.223984   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:51.722501   48683 type.go:168] "Request Body" body=""
	I1206 08:49:51.722574   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:51.722889   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:52.222457   48683 type.go:168] "Request Body" body=""
	I1206 08:49:52.222528   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:52.222799   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:52.722542   48683 type.go:168] "Request Body" body=""
	I1206 08:49:52.722621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:52.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:53.222576   48683 type.go:168] "Request Body" body=""
	I1206 08:49:53.222646   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:53.222986   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:53.723440   48683 type.go:168] "Request Body" body=""
	I1206 08:49:53.723514   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:53.723868   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:53.723922   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:54.222571   48683 type.go:168] "Request Body" body=""
	I1206 08:49:54.222646   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:54.222982   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:54.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:49:54.722637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:54.723007   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:55.222545   48683 type.go:168] "Request Body" body=""
	I1206 08:49:55.222641   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:55.222936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:55.722583   48683 type.go:168] "Request Body" body=""
	I1206 08:49:55.722677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:55.723009   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:56.223162   48683 type.go:168] "Request Body" body=""
	I1206 08:49:56.223235   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:56.223592   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:56.223647   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:56.723346   48683 type.go:168] "Request Body" body=""
	I1206 08:49:56.723440   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:56.723715   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:57.223483   48683 type.go:168] "Request Body" body=""
	I1206 08:49:57.223563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:57.224002   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:57.722697   48683 type.go:168] "Request Body" body=""
	I1206 08:49:57.722767   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:57.723097   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:58.222803   48683 type.go:168] "Request Body" body=""
	I1206 08:49:58.222876   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:58.223156   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:58.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:49:58.722626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:58.722960   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:58.723019   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:59.222551   48683 type.go:168] "Request Body" body=""
	I1206 08:49:59.222626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:59.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:59.723484   48683 type.go:168] "Request Body" body=""
	I1206 08:49:59.723553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:59.723878   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:00.222685   48683 type.go:168] "Request Body" body=""
	I1206 08:50:00.222804   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:00.223133   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:00.722618   48683 type.go:168] "Request Body" body=""
	I1206 08:50:00.722691   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:00.723059   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:00.723115   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:01.222895   48683 type.go:168] "Request Body" body=""
	I1206 08:50:01.222993   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:01.223286   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:01.722600   48683 type.go:168] "Request Body" body=""
	I1206 08:50:01.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:01.723014   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:02.222576   48683 type.go:168] "Request Body" body=""
	I1206 08:50:02.222651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:02.223022   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:02.722457   48683 type.go:168] "Request Body" body=""
	I1206 08:50:02.722533   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:02.722815   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:03.222502   48683 type.go:168] "Request Body" body=""
	I1206 08:50:03.222573   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:03.222946   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:03.222994   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:03.722541   48683 type.go:168] "Request Body" body=""
	I1206 08:50:03.722640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:03.722983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:04.222608   48683 type.go:168] "Request Body" body=""
	I1206 08:50:04.222676   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:04.223006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:04.722602   48683 type.go:168] "Request Body" body=""
	I1206 08:50:04.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:04.723041   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:05.222818   48683 type.go:168] "Request Body" body=""
	I1206 08:50:05.222895   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:05.223192   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:05.223237   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:05.722878   48683 type.go:168] "Request Body" body=""
	I1206 08:50:05.722947   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:05.723266   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:06.223357   48683 type.go:168] "Request Body" body=""
	I1206 08:50:06.223444   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:06.223770   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:06.722470   48683 type.go:168] "Request Body" body=""
	I1206 08:50:06.722567   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:06.722904   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:07.222610   48683 type.go:168] "Request Body" body=""
	I1206 08:50:07.222692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:07.222961   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:07.722589   48683 type.go:168] "Request Body" body=""
	I1206 08:50:07.722668   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:07.723032   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:07.723088   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:08.222659   48683 type.go:168] "Request Body" body=""
	I1206 08:50:08.222739   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:08.223085   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:08.722770   48683 type.go:168] "Request Body" body=""
	I1206 08:50:08.722843   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:08.723145   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:09.222527   48683 type.go:168] "Request Body" body=""
	I1206 08:50:09.222599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:09.222860   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:09.722567   48683 type.go:168] "Request Body" body=""
	I1206 08:50:09.722657   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:09.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:10.222655   48683 type.go:168] "Request Body" body=""
	I1206 08:50:10.222734   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:10.223056   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:10.223102   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:10.722609   48683 type.go:168] "Request Body" body=""
	I1206 08:50:10.722688   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:10.723026   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:11.222874   48683 type.go:168] "Request Body" body=""
	I1206 08:50:11.222955   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:11.223305   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:11.723062   48683 type.go:168] "Request Body" body=""
	I1206 08:50:11.723127   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:11.723408   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:12.223181   48683 type.go:168] "Request Body" body=""
	I1206 08:50:12.223261   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:12.223620   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:12.223677   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:12.723275   48683 type.go:168] "Request Body" body=""
	I1206 08:50:12.723355   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:12.723713   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:13.223472   48683 type.go:168] "Request Body" body=""
	I1206 08:50:13.223538   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:13.223808   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:13.722511   48683 type.go:168] "Request Body" body=""
	I1206 08:50:13.722583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:13.722888   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:14.222590   48683 type.go:168] "Request Body" body=""
	I1206 08:50:14.222669   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:14.222999   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:14.722507   48683 type.go:168] "Request Body" body=""
	I1206 08:50:14.722580   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:14.722918   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:14.722969   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:15.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:50:15.222649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:15.222970   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:15.722583   48683 type.go:168] "Request Body" body=""
	I1206 08:50:15.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:15.722978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:16.223181   48683 type.go:168] "Request Body" body=""
	I1206 08:50:16.223255   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:16.223535   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:16.723326   48683 type.go:168] "Request Body" body=""
	I1206 08:50:16.723416   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:16.723757   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:16.723819   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:17.222495   48683 type.go:168] "Request Body" body=""
	I1206 08:50:17.222577   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:17.222914   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:17.722474   48683 type.go:168] "Request Body" body=""
	I1206 08:50:17.722547   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:17.722850   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:18.222583   48683 type.go:168] "Request Body" body=""
	I1206 08:50:18.222661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:18.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:18.722692   48683 type.go:168] "Request Body" body=""
	I1206 08:50:18.722776   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:18.723111   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:19.222505   48683 type.go:168] "Request Body" body=""
	I1206 08:50:19.222594   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:19.222859   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:19.222907   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:19.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:50:19.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:19.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:20.222684   48683 type.go:168] "Request Body" body=""
	I1206 08:50:20.222788   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:20.223168   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:20.722431   48683 type.go:168] "Request Body" body=""
	I1206 08:50:20.722497   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:20.722767   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:21.222641   48683 type.go:168] "Request Body" body=""
	I1206 08:50:21.222714   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:21.223070   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:21.223132   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:21.722822   48683 type.go:168] "Request Body" body=""
	I1206 08:50:21.722896   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:21.723237   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:22.222916   48683 type.go:168] "Request Body" body=""
	I1206 08:50:22.222997   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:22.223321   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:22.723124   48683 type.go:168] "Request Body" body=""
	I1206 08:50:22.723201   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:22.723551   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:23.223344   48683 type.go:168] "Request Body" body=""
	I1206 08:50:23.223446   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:23.223810   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:23.223863   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:23.722552   48683 type.go:168] "Request Body" body=""
	I1206 08:50:23.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:23.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:24.222565   48683 type.go:168] "Request Body" body=""
	I1206 08:50:24.222636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:24.222967   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:24.722582   48683 type.go:168] "Request Body" body=""
	I1206 08:50:24.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:24.723045   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:25.222591   48683 type.go:168] "Request Body" body=""
	I1206 08:50:25.222675   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:25.222956   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:25.722490   48683 type.go:168] "Request Body" body=""
	I1206 08:50:25.722558   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:25.722858   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:25.722902   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:26.222992   48683 type.go:168] "Request Body" body=""
	I1206 08:50:26.223066   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:26.223429   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:26.723227   48683 type.go:168] "Request Body" body=""
	I1206 08:50:26.723293   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:26.723619   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:27.223425   48683 type.go:168] "Request Body" body=""
	I1206 08:50:27.223499   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:27.223833   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:27.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:50:27.722621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:27.722968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:27.723024   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:28.222458   48683 type.go:168] "Request Body" body=""
	I1206 08:50:28.222528   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:28.222853   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:28.722553   48683 type.go:168] "Request Body" body=""
	I1206 08:50:28.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:28.722950   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:29.222553   48683 type.go:168] "Request Body" body=""
	I1206 08:50:29.222651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:29.222978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:29.722677   48683 type.go:168] "Request Body" body=""
	I1206 08:50:29.722755   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:29.723172   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:29.723243   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:30.222914   48683 type.go:168] "Request Body" body=""
	I1206 08:50:30.222992   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:30.223302   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:30.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:50:30.722632   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:30.722926   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:31.222879   48683 type.go:168] "Request Body" body=""
	I1206 08:50:31.222948   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:31.223214   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:31.722593   48683 type.go:168] "Request Body" body=""
	I1206 08:50:31.722667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:31.723003   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:32.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:50:32.222636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:32.222931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:32.222979   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:32.722487   48683 type.go:168] "Request Body" body=""
	I1206 08:50:32.722557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:32.722887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 poll repeats every ~500ms from 08:50:33 through 08:51:34, each request returning no response (status="" milliseconds=0); node_ready.go:55 logs the "connection refused (will retry)" warning roughly every 2.5s throughout ...]
	I1206 08:51:35.222597   48683 type.go:168] "Request Body" body=""
	I1206 08:51:35.222685   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:35.223031   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:35.722528   48683 type.go:168] "Request Body" body=""
	I1206 08:51:35.722600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:35.722870   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:36.223103   48683 type.go:168] "Request Body" body=""
	I1206 08:51:36.223184   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:36.223557   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:36.223614   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:36.723236   48683 type.go:168] "Request Body" body=""
	I1206 08:51:36.723314   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:36.723677   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:37.223456   48683 type.go:168] "Request Body" body=""
	I1206 08:51:37.223536   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:37.223814   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:37.722521   48683 type.go:168] "Request Body" body=""
	I1206 08:51:37.722595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:37.722941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:38.222667   48683 type.go:168] "Request Body" body=""
	I1206 08:51:38.222743   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:38.223128   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:38.722867   48683 type.go:168] "Request Body" body=""
	I1206 08:51:38.722943   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:38.723253   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:38.723310   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:39.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:51:39.222649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:39.223000   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:39.722686   48683 type.go:168] "Request Body" body=""
	I1206 08:51:39.722767   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:39.723127   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:40.222805   48683 type.go:168] "Request Body" body=""
	I1206 08:51:40.222893   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:40.223247   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:40.722586   48683 type.go:168] "Request Body" body=""
	I1206 08:51:40.722664   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:40.723068   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:41.223068   48683 type.go:168] "Request Body" body=""
	I1206 08:51:41.223147   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:41.223511   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:41.223567   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:41.723311   48683 type.go:168] "Request Body" body=""
	I1206 08:51:41.723402   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:41.723663   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:42.223489   48683 type.go:168] "Request Body" body=""
	I1206 08:51:42.223566   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:42.223933   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:42.722618   48683 type.go:168] "Request Body" body=""
	I1206 08:51:42.722694   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:42.723031   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:43.222740   48683 type.go:168] "Request Body" body=""
	I1206 08:51:43.222816   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:43.223098   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:43.722547   48683 type.go:168] "Request Body" body=""
	I1206 08:51:43.722622   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:43.722965   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:43.723044   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:44.222550   48683 type.go:168] "Request Body" body=""
	I1206 08:51:44.222647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:44.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:44.722528   48683 type.go:168] "Request Body" body=""
	I1206 08:51:44.722603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:44.722920   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:45.222681   48683 type.go:168] "Request Body" body=""
	I1206 08:51:45.222768   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:45.223254   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:45.723085   48683 type.go:168] "Request Body" body=""
	I1206 08:51:45.723156   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:45.723536   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:45.723592   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:46.223392   48683 type.go:168] "Request Body" body=""
	I1206 08:51:46.223456   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:46.223709   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:46.722472   48683 type.go:168] "Request Body" body=""
	I1206 08:51:46.722550   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:46.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:47.222580   48683 type.go:168] "Request Body" body=""
	I1206 08:51:47.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:47.223014   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:47.722500   48683 type.go:168] "Request Body" body=""
	I1206 08:51:47.722572   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:47.722920   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:48.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:48.222647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:48.222994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:48.223050   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:48.722729   48683 type.go:168] "Request Body" body=""
	I1206 08:51:48.722814   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:48.723224   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:49.222495   48683 type.go:168] "Request Body" body=""
	I1206 08:51:49.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:49.222841   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:49.722543   48683 type.go:168] "Request Body" body=""
	I1206 08:51:49.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:49.722989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:50.222560   48683 type.go:168] "Request Body" body=""
	I1206 08:51:50.222640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:50.222975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:50.722647   48683 type.go:168] "Request Body" body=""
	I1206 08:51:50.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:50.723039   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:50.723088   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:51.222890   48683 type.go:168] "Request Body" body=""
	I1206 08:51:51.222961   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:51.223302   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:51.723095   48683 type.go:168] "Request Body" body=""
	I1206 08:51:51.723166   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:51.723527   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:52.223293   48683 type.go:168] "Request Body" body=""
	I1206 08:51:52.223365   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:52.223638   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:52.723480   48683 type.go:168] "Request Body" body=""
	I1206 08:51:52.723556   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:52.723872   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:52.723957   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:53.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:53.222650   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:53.222971   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:53.722667   48683 type.go:168] "Request Body" body=""
	I1206 08:51:53.722737   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:53.723003   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:54.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:51:54.222637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:54.222983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:54.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:51:54.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:54.722987   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:55.223524   48683 type.go:168] "Request Body" body=""
	I1206 08:51:55.223593   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:55.223922   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:55.223979   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:55.722631   48683 type.go:168] "Request Body" body=""
	I1206 08:51:55.722706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:55.723040   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:56.223219   48683 type.go:168] "Request Body" body=""
	I1206 08:51:56.223289   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:56.223644   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:56.723321   48683 type.go:168] "Request Body" body=""
	I1206 08:51:56.723409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:56.723712   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:57.223501   48683 type.go:168] "Request Body" body=""
	I1206 08:51:57.223578   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:57.223899   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:57.722570   48683 type.go:168] "Request Body" body=""
	I1206 08:51:57.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:57.722944   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:57.722991   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:58.222513   48683 type.go:168] "Request Body" body=""
	I1206 08:51:58.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:58.222843   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:58.722524   48683 type.go:168] "Request Body" body=""
	I1206 08:51:58.722599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:58.722929   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:59.222533   48683 type.go:168] "Request Body" body=""
	I1206 08:51:59.222619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:59.222968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:59.722619   48683 type.go:168] "Request Body" body=""
	I1206 08:51:59.722692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:59.723017   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:59.723091   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:00.222670   48683 type.go:168] "Request Body" body=""
	I1206 08:52:00.222765   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:00.223085   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:00.722589   48683 type.go:168] "Request Body" body=""
	I1206 08:52:00.722664   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:00.722961   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:01.222893   48683 type.go:168] "Request Body" body=""
	I1206 08:52:01.222975   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:01.223252   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:01.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:52:01.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:01.722982   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:02.222554   48683 type.go:168] "Request Body" body=""
	I1206 08:52:02.222634   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:02.222965   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:02.223025   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:02.722664   48683 type.go:168] "Request Body" body=""
	I1206 08:52:02.722731   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:02.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:03.222669   48683 type.go:168] "Request Body" body=""
	I1206 08:52:03.222742   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:03.223082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:03.722639   48683 type.go:168] "Request Body" body=""
	I1206 08:52:03.722717   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:03.723036   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:04.222516   48683 type.go:168] "Request Body" body=""
	I1206 08:52:04.222582   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:04.222867   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:04.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:52:04.722657   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:04.722999   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:04.723061   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:05.222582   48683 type.go:168] "Request Body" body=""
	I1206 08:52:05.222660   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:05.223001   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:05.722457   48683 type.go:168] "Request Body" body=""
	I1206 08:52:05.722529   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:05.722796   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:06.223039   48683 type.go:168] "Request Body" body=""
	I1206 08:52:06.223118   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:06.223488   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:06.723240   48683 type.go:168] "Request Body" body=""
	I1206 08:52:06.723313   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:06.723661   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:06.723717   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:07.223477   48683 type.go:168] "Request Body" body=""
	I1206 08:52:07.223559   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:07.223842   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:07.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:52:07.722632   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:07.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:08.222574   48683 type.go:168] "Request Body" body=""
	I1206 08:52:08.222667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:08.223018   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:08.722509   48683 type.go:168] "Request Body" body=""
	I1206 08:52:08.722579   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:08.722903   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:09.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:52:09.222633   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:09.222980   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:09.223037   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:09.722702   48683 type.go:168] "Request Body" body=""
	I1206 08:52:09.722790   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:09.723150   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:10.222456   48683 type.go:168] "Request Body" body=""
	I1206 08:52:10.222522   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:10.222851   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:10.722535   48683 type.go:168] "Request Body" body=""
	I1206 08:52:10.722612   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:10.722985   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:11.222767   48683 type.go:168] "Request Body" body=""
	I1206 08:52:11.222843   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:11.223181   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:11.223242   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:11.722502   48683 type.go:168] "Request Body" body=""
	I1206 08:52:11.722584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:11.722907   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:12.222589   48683 type.go:168] "Request Body" body=""
	I1206 08:52:12.222662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:12.223039   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:12.722612   48683 type.go:168] "Request Body" body=""
	I1206 08:52:12.722687   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:12.723066   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:13.222774   48683 type.go:168] "Request Body" body=""
	I1206 08:52:13.222844   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:13.223128   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:13.722797   48683 type.go:168] "Request Body" body=""
	I1206 08:52:13.722874   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:13.723220   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:13.723278   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:14.222938   48683 type.go:168] "Request Body" body=""
	I1206 08:52:14.223011   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:14.223370   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:14.723151   48683 type.go:168] "Request Body" body=""
	I1206 08:52:14.723218   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:14.723511   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:15.223282   48683 type.go:168] "Request Body" body=""
	I1206 08:52:15.223353   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:15.223716   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:15.723508   48683 type.go:168] "Request Body" body=""
	I1206 08:52:15.723596   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:15.723933   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:15.723988   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:16.223075   48683 type.go:168] "Request Body" body=""
	I1206 08:52:16.223148   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:16.223467   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:16.723393   48683 type.go:168] "Request Body" body=""
	I1206 08:52:16.723470   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:16.723870   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:17.222594   48683 type.go:168] "Request Body" body=""
	I1206 08:52:17.222670   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:17.222997   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:17.722535   48683 type.go:168] "Request Body" body=""
	I1206 08:52:17.722611   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:17.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:18.222588   48683 type.go:168] "Request Body" body=""
	I1206 08:52:18.222665   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:18.223008   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:18.223068   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:18.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:52:18.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:18.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:19.222512   48683 type.go:168] "Request Body" body=""
	I1206 08:52:19.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:19.222898   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:19.722569   48683 type.go:168] "Request Body" body=""
	I1206 08:52:19.722641   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:19.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:20.222575   48683 type.go:168] "Request Body" body=""
	I1206 08:52:20.222652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:20.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:20.722493   48683 type.go:168] "Request Body" body=""
	I1206 08:52:20.722564   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:20.722881   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:20.722931   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:21.222827   48683 type.go:168] "Request Body" body=""
	I1206 08:52:21.222898   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:21.223270   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:21.722982   48683 type.go:168] "Request Body" body=""
	I1206 08:52:21.723059   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:21.723422   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:22.223208   48683 type.go:168] "Request Body" body=""
	I1206 08:52:22.223282   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:22.223570   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:22.723366   48683 type.go:168] "Request Body" body=""
	I1206 08:52:22.723481   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:22.723885   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:22.723946   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:23.222491   48683 type.go:168] "Request Body" body=""
	I1206 08:52:23.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:23.222913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:23.722590   48683 type.go:168] "Request Body" body=""
	I1206 08:52:23.722661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:23.722923   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:24.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:52:24.222650   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:24.223028   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:24.722593   48683 type.go:168] "Request Body" body=""
	I1206 08:52:24.722671   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:24.723025   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:25.222591   48683 type.go:168] "Request Body" body=""
	I1206 08:52:25.222663   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:25.222988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:25.223036   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 request/empty-response pair repeats every ~500ms from 08:52:25 through 08:53:16, and the same "connection refused" retry warning recurs roughly every two seconds; the intervening poll cycles are elided here ...]
	W1206 08:53:16.223314   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:16.723228   48683 type.go:168] "Request Body" body=""
	I1206 08:53:16.723311   48683 node_ready.go:38] duration metric: took 6m0.000967258s for node "functional-090986" to be "Ready" ...
	I1206 08:53:16.726672   48683 out.go:203] 
	W1206 08:53:16.729718   48683 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 08:53:16.729749   48683 out.go:285] * 
	W1206 08:53:16.732326   48683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:53:16.735459   48683 out.go:203] 
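
The six minutes of polling condensed above are minikube's node-readiness wait running out its deadline. A minimal sketch of that loop's shape, assuming only the Go standard library (this is not minikube's actual node_ready.go; the URL is copied from the log, and certificate verification is disabled purely to keep the sketch self-contained):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls the node URL every 500ms until the apiserver answers
// or the context deadline expires -- the same shape as the loop logged above.
func waitNodeReady(ctx context.Context, url string) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// After 6m of refused connections this is what the log reports:
			// "WaitNodeCondition: context deadline exceeded".
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				continue // "connect: connection refused": apiserver not up yet, retry
			}
			resp.Body.Close()
			// Any HTTP answer means the apiserver is reachable again; the real
			// client would then fetch the Node object and inspect its "Ready" condition.
			return nil
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-090986"))
}

With nothing listening on 8441, every tick takes the retry branch until the 6-minute context fires, which matches the GUEST_START exit above.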
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685515413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685539175Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685585510Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685633936Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685656131Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685667889Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685677202Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685694884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685715749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685747626Z" level=info msg="Connect containerd service"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.686095638Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.686870939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.705986055Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.706051458Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.706080389Z" level=info msg="Start subscribing containerd event"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.706125444Z" level=info msg="Start recovering state"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748802723Z" level=info msg="Start event monitor"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748865804Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748876331Z" level=info msg="Start streaming server"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748885316Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748893349Z" level=info msg="runtime interface starting up..."
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748899814Z" level=info msg="starting plugins..."
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748911703Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.749245635Z" level=info msg="containerd successfully booted in 0.086829s"
	Dec 06 08:47:13 functional-090986 systemd[1]: Started containerd.service - containerd container runtime.
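
The only error in an otherwise clean containerd start is the CNI loader finding no config in /etc/cni/net.d. containerd performs this through libcni; a minimal stand-in for what it verifies (directory path and message taken from the log line above, not containerd's real code) would be:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d" // containerd's default CNI config directory
	var configs []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		found, _ := filepath.Glob(filepath.Join(dir, pattern))
		configs = append(configs, found...)
	}
	if len(configs) == 0 {
		// Mirrors the log: "no network config found in /etc/cni/net.d"
		fmt.Fprintf(os.Stderr, "cni config load failed: no network config found in %s\n", dir)
		os.Exit(1)
	}
	fmt.Println("CNI configs present:", configs)
}

On a healthy node, minikube's CNI setup drops a conflist into that directory as the node bootstraps; because kubelet never starts here (see the kubelet section below), the directory stays empty and this warning is expected rather than the root cause.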
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:53:18.722761    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:18.723230    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:18.724943    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:18.725762    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:18.727681    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> kernel <==
	 08:53:18 up 35 min,  0 user,  load average: 0.11, 0.23, 0.53
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 08:53:15 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:16 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 06 08:53:16 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:16 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:16 functional-090986 kubelet[8364]: E1206 08:53:16.265663    8364 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:16 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:16 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:16 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 06 08:53:16 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:16 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:17 functional-090986 kubelet[8370]: E1206 08:53:17.038788    8370 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:17 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:17 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:17 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 06 08:53:17 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:17 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:17 functional-090986 kubelet[8389]: E1206 08:53:17.793108    8389 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:17 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:17 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:18 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 06 08:53:18 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:18 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:18 functional-090986 kubelet[8432]: E1206 08:53:18.534878    8432 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:18 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:18 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
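The kubelet journal above shows the crash loop plainly: restart counters 808 through 811 inside a few seconds, every attempt exiting with the same validation error because the host is still on cgroup v1. As a minimal sketch (assuming the standard /sys/fs/cgroup mount layout), the host's cgroup hierarchy can be confirmed from a shell:

	# cgroup2fs indicates the unified cgroup v2 hierarchy; tmpfs indicates legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/

On this host the output would be expected to be tmpfs, consistent with each kubelet restart failing with "failed to validate kubelet configuration".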
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (389.975759ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.74s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-090986 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-090986 get po -A: exit status 1 (57.492083ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-090986 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-090986 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-090986 get po -A"
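All three assertions trace back to one fact: nothing is listening on the apiserver port, so kubectl is refused before any pods can be listed. The manual equivalent of the failing check, assuming the same kubeconfig context the test uses:

	# On a healthy cluster this lists pods across all namespaces, including
	# kube-system; here it fails with "connection refused" on 192.168.49.2:8441.
	kubectl --context functional-090986 get po -A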
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
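The inspect output shows the container itself is fine: Running, privileged, attached to the functional-090986 network at 192.168.49.2, with 8441/tcp published to 127.0.0.1:32791. The refused connections are therefore not a Docker networking problem; the apiserver behind the mapping simply is not up. As a sketch, the same mapping can be read back with the Go template minikube's cli_runner uses later in this log (adapted here from 22/tcp to 8441/tcp):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-090986
	# prints 32791 for the container state captured above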
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (340.154822ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
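Note the asymmetry with the earlier probe: {{.Host}} reports Running while {{.APIServer}} reported Stopped, the signature of a booted container whose control plane never came up. Both fields come from the same status struct, so they can be sampled in one call; a sketch, assuming minikube's --format flag accepts a combined Go template (it is an ordinary text/template):

	out/minikube-linux-arm64 status -p functional-090986 --format='host={{.Host}} apiserver={{.APIServer}}'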
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-181746 addons list                                                                                                                           │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ addons         │ functional-181746 addons list -o json                                                                                                                   │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service hello-node-connect --url                                                                                                      │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ start          │ -p functional-181746 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                   │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ service        │ functional-181746 service list                                                                                                                          │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-181746 --alsologtostderr -v=1                                                                                          │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service list -o json                                                                                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service --namespace=default --https --url hello-node                                                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service hello-node --url --format={{.IP}}                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ service        │ functional-181746 service hello-node --url                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format short --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format yaml --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ ssh            │ functional-181746 ssh pgrep buildkitd                                                                                                                   │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ image          │ functional-181746 image build -t localhost/my-image:functional-181746 testdata/build --alsologtostderr                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format json --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format table --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls                                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ delete         │ -p functional-181746                                                                                                                                    │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ start          │ -p functional-090986 --alsologtostderr -v=8                                                                                                             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:47 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:47:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:47:11.094911   48683 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:47:11.095050   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095060   48683 out.go:374] Setting ErrFile to fd 2...
	I1206 08:47:11.095065   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095329   48683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:47:11.095763   48683 out.go:368] Setting JSON to false
	I1206 08:47:11.096588   48683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1782,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:47:11.096668   48683 start.go:143] virtualization:  
	I1206 08:47:11.100026   48683 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:47:11.103775   48683 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:47:11.103977   48683 notify.go:221] Checking for updates...
	I1206 08:47:11.109719   48683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:47:11.112668   48683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:11.115549   48683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:47:11.118516   48683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:47:11.121495   48683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:47:11.124961   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:11.125074   48683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:47:11.149854   48683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:47:11.149988   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.212959   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.203697623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.213084   48683 docker.go:319] overlay module found
	I1206 08:47:11.216243   48683 out.go:179] * Using the docker driver based on existing profile
	I1206 08:47:11.219285   48683 start.go:309] selected driver: docker
	I1206 08:47:11.219311   48683 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.219451   48683 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:47:11.219560   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.284944   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.27604915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.285369   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:11.285438   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:47:11.285486   48683 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.289257   48683 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:47:11.292082   48683 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:47:11.295206   48683 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:47:11.298095   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:11.298152   48683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:47:11.298166   48683 cache.go:65] Caching tarball of preloaded images
	I1206 08:47:11.298170   48683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:47:11.298253   48683 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:47:11.298264   48683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:47:11.298374   48683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:47:11.317301   48683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:47:11.317323   48683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:47:11.317345   48683 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:47:11.317377   48683 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:47:11.317445   48683 start.go:364] duration metric: took 50.847µs to acquireMachinesLock for "functional-090986"
	I1206 08:47:11.317466   48683 start.go:96] Skipping create...Using existing machine configuration
	I1206 08:47:11.317471   48683 fix.go:54] fixHost starting: 
	I1206 08:47:11.317772   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:11.334567   48683 fix.go:112] recreateIfNeeded on functional-090986: state=Running err=<nil>
	W1206 08:47:11.334595   48683 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 08:47:11.337684   48683 out.go:252] * Updating the running docker "functional-090986" container ...
	I1206 08:47:11.337717   48683 machine.go:94] provisionDockerMachine start ...
	I1206 08:47:11.337795   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.354534   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.354869   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.354883   48683 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:47:11.507058   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.507088   48683 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:47:11.507161   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.525196   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.525520   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.525537   48683 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:47:11.684471   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.684556   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.702187   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.702515   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.702540   48683 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:47:11.859622   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:47:11.859650   48683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:47:11.859671   48683 ubuntu.go:190] setting up certificates
	I1206 08:47:11.859680   48683 provision.go:84] configureAuth start
	I1206 08:47:11.859747   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:11.877706   48683 provision.go:143] copyHostCerts
	I1206 08:47:11.877750   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877787   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:47:11.877800   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877873   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:47:11.877976   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.877997   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:47:11.878007   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.878035   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:47:11.878088   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878108   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:47:11.878114   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878140   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:47:11.878192   48683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
	I1206 08:47:12.018564   48683 provision.go:177] copyRemoteCerts
	I1206 08:47:12.018632   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:47:12.018672   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.036577   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.143156   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 08:47:12.143226   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:47:12.160243   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 08:47:12.160303   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 08:47:12.177568   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 08:47:12.177628   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:47:12.194504   48683 provision.go:87] duration metric: took 334.802128ms to configureAuth
	I1206 08:47:12.194543   48683 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:47:12.194717   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:12.194725   48683 machine.go:97] duration metric: took 857.000255ms to provisionDockerMachine
	I1206 08:47:12.194732   48683 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:47:12.194743   48683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:47:12.194796   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:47:12.194842   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.212073   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.315270   48683 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:47:12.318678   48683 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 08:47:12.318701   48683 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 08:47:12.318706   48683 command_runner.go:130] > VERSION_ID="12"
	I1206 08:47:12.318711   48683 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 08:47:12.318717   48683 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 08:47:12.318720   48683 command_runner.go:130] > ID=debian
	I1206 08:47:12.318724   48683 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 08:47:12.318730   48683 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 08:47:12.318735   48683 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 08:47:12.318975   48683 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:47:12.319002   48683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:47:12.319013   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:47:12.319072   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:47:12.319161   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:47:12.319172   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /etc/ssl/certs/42922.pem
	I1206 08:47:12.319246   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:47:12.319253   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> /etc/test/nested/copy/4292/hosts
	I1206 08:47:12.319298   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:47:12.327031   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:12.344679   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:47:12.363077   48683 start.go:296] duration metric: took 168.329595ms for postStartSetup
	I1206 08:47:12.363152   48683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:47:12.363210   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.380353   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.487060   48683 command_runner.go:130] > 11%
	I1206 08:47:12.487699   48683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:47:12.493338   48683 command_runner.go:130] > 174G
	I1206 08:47:12.494716   48683 fix.go:56] duration metric: took 1.177238165s for fixHost
	I1206 08:47:12.494741   48683 start.go:83] releasing machines lock for "functional-090986", held for 1.177286419s
	I1206 08:47:12.494813   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:12.512960   48683 ssh_runner.go:195] Run: cat /version.json
	I1206 08:47:12.513022   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.513272   48683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:47:12.513331   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.541090   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.554766   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.647127   48683 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 08:47:12.647264   48683 ssh_runner.go:195] Run: systemctl --version
	I1206 08:47:12.750867   48683 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 08:47:12.751021   48683 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 08:47:12.751059   48683 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 08:47:12.751151   48683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 08:47:12.755609   48683 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 08:47:12.756103   48683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:47:12.756176   48683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:47:12.764393   48683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 08:47:12.764420   48683 start.go:496] detecting cgroup driver to use...
	I1206 08:47:12.764452   48683 detect.go:187] detected "cgroupfs" cgroup driver on host os
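
detect.go:187 above reports "cgroupfs" as the host cgroup driver (consistent with the cgroup-v1 deprecation warning containerd emits later in this log). A minimal Go sketch of one common detection heuristic, not necessarily the exact logic in minikube's detect.go: a unified (v2) hierarchy exposes /sys/fs/cgroup/cgroup.controllers and is usually paired with the systemd driver, while its absence implies a legacy v1 hierarchy and the cgroupfs driver.

    // cgroupDriver guesses a cgroup driver the way many tools do:
    // /sys/fs/cgroup/cgroup.controllers exists only on a unified (v2)
    // hierarchy, typically managed via systemd; otherwise assume a
    // legacy v1 hierarchy and the "cgroupfs" driver.
    // Illustrative heuristic only, not minikube's detect.go.
    package main

    import (
        "fmt"
        "os"
    )

    func cgroupDriver() string {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "systemd"
        }
        return "cgroupfs"
    }

    func main() {
        fmt.Println("detected cgroup driver:", cgroupDriver())
    }
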
	I1206 08:47:12.764507   48683 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:47:12.779951   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:47:12.793243   48683 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:47:12.793324   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:47:12.809005   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:47:12.823043   48683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:47:12.939696   48683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:47:13.060632   48683 docker.go:234] disabling docker service ...
	I1206 08:47:13.060721   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:47:13.078332   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:47:13.093719   48683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:47:13.229319   48683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:47:13.368814   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:47:13.381432   48683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:47:13.395011   48683 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1206 08:47:13.396419   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:47:13.405770   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:47:13.415310   48683 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:47:13.415505   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:47:13.424963   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.433399   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:47:13.442072   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.450816   48683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:47:13.458824   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:47:13.467776   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:47:13.477145   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 08:47:13.486457   48683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:47:13.493910   48683 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 08:47:13.494986   48683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:47:13.503356   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:13.622996   48683 ssh_runner.go:195] Run: sudo systemctl restart containerd
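
The run of sed commands above rewrites /etc/containerd/config.toml in place before the restart: it pins sandbox_image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the cgroupfs driver, migrates the io.containerd.runtime.v1.linux and runc.v1 runtime types to io.containerd.runc.v2, resets conf_dir to /etc/cni/net.d, and re-inserts enable_unprivileged_ports = true. A Go sketch of just the SystemdCgroup edit, equivalent to the logged sed expression 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' (illustrative; minikube actually shells out to sed as shown):

    // rewriteSystemdCgroup applies the same edit as the logged sed:
    // any "SystemdCgroup = ..." line keeps its indentation but has its
    // value forced to false, so containerd matches the cgroupfs driver.
    package main

    import (
        "fmt"
        "regexp"
    )

    func rewriteSystemdCgroup(config string) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
    }

    func main() {
        in := "  [plugins.runtimes.runc.options]\n    SystemdCgroup = true\n"
        fmt.Print(rewriteSystemdCgroup(in))
    }
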
	I1206 08:47:13.753042   48683 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:47:13.753133   48683 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:47:13.757647   48683 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 08:47:13.757672   48683 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 08:47:13.757681   48683 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1206 08:47:13.757689   48683 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:13.757724   48683 command_runner.go:130] > Access: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757736   48683 command_runner.go:130] > Modify: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757742   48683 command_runner.go:130] > Change: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757746   48683 command_runner.go:130] >  Birth: -
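
start.go:543 above announces a 60s wait for /run/containerd/containerd.sock, and the stat output shows the socket appeared on the first probe after the restart. A hedged sketch of such a wait loop (a simple polling assumption; the real start.go wait may be structured differently):

    // waitForSocket polls until path exists and is a unix socket, or
    // the deadline passes -- mirroring "Will wait 60s for socket path
    // /run/containerd/containerd.sock" in the log above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
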
	I1206 08:47:13.757803   48683 start.go:564] Will wait 60s for crictl version
	I1206 08:47:13.757883   48683 ssh_runner.go:195] Run: which crictl
	I1206 08:47:13.761846   48683 command_runner.go:130] > /usr/local/bin/crictl
	I1206 08:47:13.761974   48683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:47:13.786269   48683 command_runner.go:130] > Version:  0.1.0
	I1206 08:47:13.786289   48683 command_runner.go:130] > RuntimeName:  containerd
	I1206 08:47:13.786295   48683 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 08:47:13.786302   48683 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 08:47:13.788604   48683 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 08:47:13.788708   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.809864   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 08:47:13.811926   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.831700   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 08:47:13.839817   48683 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 08:47:13.842721   48683 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:47:13.858999   48683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:47:13.862710   48683 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 08:47:13.862939   48683 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:47:13.863057   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:13.863132   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.889556   48683 command_runner.go:130] > {
	I1206 08:47:13.889580   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.889586   48683 command_runner.go:130] >     {
	I1206 08:47:13.889601   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.889607   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889612   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.889616   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889619   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889628   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.889635   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889640   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.889652   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889657   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889661   48683 command_runner.go:130] >     },
	I1206 08:47:13.889664   48683 command_runner.go:130] >     {
	I1206 08:47:13.889672   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.889676   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889681   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.889687   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889691   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889707   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.889710   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889715   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.889725   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889729   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889733   48683 command_runner.go:130] >     },
	I1206 08:47:13.889736   48683 command_runner.go:130] >     {
	I1206 08:47:13.889743   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.889752   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889767   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.889770   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889777   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889785   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.889792   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889796   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.889800   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.889808   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889815   48683 command_runner.go:130] >     },
	I1206 08:47:13.889818   48683 command_runner.go:130] >     {
	I1206 08:47:13.889825   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.889829   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889837   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.889841   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889844   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889852   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.889863   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889867   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.889871   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889875   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889885   48683 command_runner.go:130] >       },
	I1206 08:47:13.889889   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889892   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889896   48683 command_runner.go:130] >     },
	I1206 08:47:13.889899   48683 command_runner.go:130] >     {
	I1206 08:47:13.889906   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.889912   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889918   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.889920   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889925   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889933   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.889937   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889945   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.889949   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889960   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889964   48683 command_runner.go:130] >       },
	I1206 08:47:13.889970   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889975   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889987   48683 command_runner.go:130] >     },
	I1206 08:47:13.890022   48683 command_runner.go:130] >     {
	I1206 08:47:13.890033   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.890037   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890043   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.890049   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890054   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890064   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.890070   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890075   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.890078   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890082   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890087   48683 command_runner.go:130] >       },
	I1206 08:47:13.890092   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890098   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890102   48683 command_runner.go:130] >     },
	I1206 08:47:13.890105   48683 command_runner.go:130] >     {
	I1206 08:47:13.890112   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.890115   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890121   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.890124   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890139   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.890145   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890149   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.890153   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890156   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890159   48683 command_runner.go:130] >     },
	I1206 08:47:13.890170   48683 command_runner.go:130] >     {
	I1206 08:47:13.890177   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.890181   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890187   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.890190   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890206   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.890215   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890223   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.890228   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890231   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890235   48683 command_runner.go:130] >       },
	I1206 08:47:13.890239   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890250   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890254   48683 command_runner.go:130] >     },
	I1206 08:47:13.890257   48683 command_runner.go:130] >     {
	I1206 08:47:13.890264   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.890272   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890277   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.890280   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890284   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890291   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.890294   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890298   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.890305   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890310   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.890315   48683 command_runner.go:130] >       },
	I1206 08:47:13.890319   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890331   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.890335   48683 command_runner.go:130] >     }
	I1206 08:47:13.890337   48683 command_runner.go:130] >   ]
	I1206 08:47:13.890340   48683 command_runner.go:130] > }
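
From the JSON above, containerd.go:627 concludes that every image required for v1.35.0-beta.0 is already present, so extracting the preload tarball is skipped. A minimal sketch of that kind of check: decode `sudo crictl images --output json` (field names follow the dump above) and compare repo tags against the expected list. Illustrative only; the real comparison lives in minikube's containerd.go:

    // Decode the `crictl images --output json` payload shown above and
    // report which expected images are present.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/pause:3.10.1",
        } {
            fmt.Printf("%-50s preloaded=%v\n", want, have[want])
        }
    }
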
	I1206 08:47:13.892630   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.892653   48683 containerd.go:534] Images already preloaded, skipping extraction
	I1206 08:47:13.892734   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.915064   48683 command_runner.go:130] > {
	I1206 08:47:13.915085   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.915091   48683 command_runner.go:130] >     {
	I1206 08:47:13.915102   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.915109   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915115   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.915119   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915142   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.915149   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915153   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.915157   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915161   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915164   48683 command_runner.go:130] >     },
	I1206 08:47:13.915167   48683 command_runner.go:130] >     {
	I1206 08:47:13.915178   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.915184   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915189   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.915193   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915208   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.915214   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915218   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.915222   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915225   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915228   48683 command_runner.go:130] >     },
	I1206 08:47:13.915231   48683 command_runner.go:130] >     {
	I1206 08:47:13.915238   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.915245   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915251   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.915254   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915262   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915270   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.915275   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915279   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.915286   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.915291   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915295   48683 command_runner.go:130] >     },
	I1206 08:47:13.915298   48683 command_runner.go:130] >     {
	I1206 08:47:13.915305   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.915311   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915320   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.915324   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915328   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915338   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.915341   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915345   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.915349   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915352   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915359   48683 command_runner.go:130] >       },
	I1206 08:47:13.915363   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915410   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915414   48683 command_runner.go:130] >     },
	I1206 08:47:13.915418   48683 command_runner.go:130] >     {
	I1206 08:47:13.915424   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.915428   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915434   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.915437   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915441   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915448   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.915451   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915455   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.915458   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915471   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915474   48683 command_runner.go:130] >       },
	I1206 08:47:13.915478   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915481   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915484   48683 command_runner.go:130] >     },
	I1206 08:47:13.915487   48683 command_runner.go:130] >     {
	I1206 08:47:13.915494   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.915497   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915503   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.915506   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915509   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915523   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.915526   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915530   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.915534   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915540   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915543   48683 command_runner.go:130] >       },
	I1206 08:47:13.915547   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915550   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915553   48683 command_runner.go:130] >     },
	I1206 08:47:13.915556   48683 command_runner.go:130] >     {
	I1206 08:47:13.915563   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.915580   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915585   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.915588   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915592   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915601   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.915608   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915612   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.915616   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915620   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915622   48683 command_runner.go:130] >     },
	I1206 08:47:13.915626   48683 command_runner.go:130] >     {
	I1206 08:47:13.915635   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.915649   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915655   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.915658   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915662   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915670   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.915676   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915680   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.915684   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915687   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915691   48683 command_runner.go:130] >       },
	I1206 08:47:13.915699   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915706   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915710   48683 command_runner.go:130] >     },
	I1206 08:47:13.915713   48683 command_runner.go:130] >     {
	I1206 08:47:13.915720   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.915723   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915728   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.915731   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915735   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915746   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.915752   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915756   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.915760   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915764   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.915777   48683 command_runner.go:130] >       },
	I1206 08:47:13.915781   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915785   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.915790   48683 command_runner.go:130] >     }
	I1206 08:47:13.915793   48683 command_runner.go:130] >   ]
	I1206 08:47:13.915796   48683 command_runner.go:130] > }
	I1206 08:47:13.917976   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.917998   48683 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:47:13.918006   48683 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:47:13.918108   48683 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:47:13.918181   48683 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:47:13.946472   48683 command_runner.go:130] > {
	I1206 08:47:13.946489   48683 command_runner.go:130] >   "cniconfig": {
	I1206 08:47:13.946494   48683 command_runner.go:130] >     "Networks": [
	I1206 08:47:13.946497   48683 command_runner.go:130] >       {
	I1206 08:47:13.946502   48683 command_runner.go:130] >         "Config": {
	I1206 08:47:13.946507   48683 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 08:47:13.946512   48683 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 08:47:13.946516   48683 command_runner.go:130] >           "Plugins": [
	I1206 08:47:13.946520   48683 command_runner.go:130] >             {
	I1206 08:47:13.946524   48683 command_runner.go:130] >               "Network": {
	I1206 08:47:13.946529   48683 command_runner.go:130] >                 "ipam": {},
	I1206 08:47:13.946537   48683 command_runner.go:130] >                 "type": "loopback"
	I1206 08:47:13.946541   48683 command_runner.go:130] >               },
	I1206 08:47:13.946554   48683 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 08:47:13.946558   48683 command_runner.go:130] >             }
	I1206 08:47:13.946561   48683 command_runner.go:130] >           ],
	I1206 08:47:13.946573   48683 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 08:47:13.946581   48683 command_runner.go:130] >         },
	I1206 08:47:13.946586   48683 command_runner.go:130] >         "IFName": "lo"
	I1206 08:47:13.946590   48683 command_runner.go:130] >       }
	I1206 08:47:13.946593   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946597   48683 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 08:47:13.946601   48683 command_runner.go:130] >     "PluginDirs": [
	I1206 08:47:13.946605   48683 command_runner.go:130] >       "/opt/cni/bin"
	I1206 08:47:13.946609   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946613   48683 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 08:47:13.946617   48683 command_runner.go:130] >     "Prefix": "eth"
	I1206 08:47:13.946620   48683 command_runner.go:130] >   },
	I1206 08:47:13.946623   48683 command_runner.go:130] >   "config": {
	I1206 08:47:13.946627   48683 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 08:47:13.946630   48683 command_runner.go:130] >       "/etc/cdi",
	I1206 08:47:13.946636   48683 command_runner.go:130] >       "/var/run/cdi"
	I1206 08:47:13.946640   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946643   48683 command_runner.go:130] >     "cni": {
	I1206 08:47:13.946646   48683 command_runner.go:130] >       "binDir": "",
	I1206 08:47:13.946650   48683 command_runner.go:130] >       "binDirs": [
	I1206 08:47:13.946653   48683 command_runner.go:130] >         "/opt/cni/bin"
	I1206 08:47:13.946656   48683 command_runner.go:130] >       ],
	I1206 08:47:13.946661   48683 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 08:47:13.946665   48683 command_runner.go:130] >       "confTemplate": "",
	I1206 08:47:13.946668   48683 command_runner.go:130] >       "ipPref": "",
	I1206 08:47:13.946672   48683 command_runner.go:130] >       "maxConfNum": 1,
	I1206 08:47:13.946676   48683 command_runner.go:130] >       "setupSerially": false,
	I1206 08:47:13.946680   48683 command_runner.go:130] >       "useInternalLoopback": false
	I1206 08:47:13.946683   48683 command_runner.go:130] >     },
	I1206 08:47:13.946688   48683 command_runner.go:130] >     "containerd": {
	I1206 08:47:13.946696   48683 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 08:47:13.946701   48683 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 08:47:13.946706   48683 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 08:47:13.946710   48683 command_runner.go:130] >       "runtimes": {
	I1206 08:47:13.946713   48683 command_runner.go:130] >         "runc": {
	I1206 08:47:13.946718   48683 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 08:47:13.946722   48683 command_runner.go:130] >           "PodAnnotations": null,
	I1206 08:47:13.946728   48683 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 08:47:13.946733   48683 command_runner.go:130] >           "cgroupWritable": false,
	I1206 08:47:13.946738   48683 command_runner.go:130] >           "cniConfDir": "",
	I1206 08:47:13.946742   48683 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 08:47:13.946745   48683 command_runner.go:130] >           "io_type": "",
	I1206 08:47:13.946748   48683 command_runner.go:130] >           "options": {
	I1206 08:47:13.946752   48683 command_runner.go:130] >             "BinaryName": "",
	I1206 08:47:13.946756   48683 command_runner.go:130] >             "CriuImagePath": "",
	I1206 08:47:13.946761   48683 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 08:47:13.946764   48683 command_runner.go:130] >             "IoGid": 0,
	I1206 08:47:13.946768   48683 command_runner.go:130] >             "IoUid": 0,
	I1206 08:47:13.946772   48683 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 08:47:13.946776   48683 command_runner.go:130] >             "Root": "",
	I1206 08:47:13.946780   48683 command_runner.go:130] >             "ShimCgroup": "",
	I1206 08:47:13.946784   48683 command_runner.go:130] >             "SystemdCgroup": false
	I1206 08:47:13.946787   48683 command_runner.go:130] >           },
	I1206 08:47:13.946793   48683 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 08:47:13.946799   48683 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 08:47:13.946803   48683 command_runner.go:130] >           "runtimePath": "",
	I1206 08:47:13.946808   48683 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 08:47:13.946812   48683 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 08:47:13.946816   48683 command_runner.go:130] >           "snapshotter": ""
	I1206 08:47:13.946820   48683 command_runner.go:130] >         }
	I1206 08:47:13.946823   48683 command_runner.go:130] >       }
	I1206 08:47:13.946826   48683 command_runner.go:130] >     },
	I1206 08:47:13.946836   48683 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 08:47:13.946848   48683 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 08:47:13.946854   48683 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 08:47:13.946858   48683 command_runner.go:130] >     "disableApparmor": false,
	I1206 08:47:13.946863   48683 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 08:47:13.946867   48683 command_runner.go:130] >     "disableProcMount": false,
	I1206 08:47:13.946871   48683 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 08:47:13.946874   48683 command_runner.go:130] >     "enableCDI": true,
	I1206 08:47:13.946878   48683 command_runner.go:130] >     "enableSelinux": false,
	I1206 08:47:13.946883   48683 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 08:47:13.946887   48683 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 08:47:13.946891   48683 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 08:47:13.946896   48683 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 08:47:13.946900   48683 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 08:47:13.946905   48683 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 08:47:13.946909   48683 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 08:47:13.946917   48683 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946922   48683 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 08:47:13.946928   48683 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946932   48683 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 08:47:13.946937   48683 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 08:47:13.946940   48683 command_runner.go:130] >   },
	I1206 08:47:13.946943   48683 command_runner.go:130] >   "features": {
	I1206 08:47:13.946948   48683 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 08:47:13.946951   48683 command_runner.go:130] >   },
	I1206 08:47:13.946955   48683 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 08:47:13.946964   48683 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946974   48683 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946977   48683 command_runner.go:130] >   "runtimeHandlers": [
	I1206 08:47:13.946980   48683 command_runner.go:130] >     {
	I1206 08:47:13.946984   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.946988   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.946992   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.946996   48683 command_runner.go:130] >       }
	I1206 08:47:13.947002   48683 command_runner.go:130] >     },
	I1206 08:47:13.947006   48683 command_runner.go:130] >     {
	I1206 08:47:13.947009   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.947015   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.947019   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.947022   48683 command_runner.go:130] >       },
	I1206 08:47:13.947026   48683 command_runner.go:130] >       "name": "runc"
	I1206 08:47:13.947029   48683 command_runner.go:130] >     }
	I1206 08:47:13.947032   48683 command_runner.go:130] >   ],
	I1206 08:47:13.947035   48683 command_runner.go:130] >   "status": {
	I1206 08:47:13.947039   48683 command_runner.go:130] >     "conditions": [
	I1206 08:47:13.947042   48683 command_runner.go:130] >       {
	I1206 08:47:13.947046   48683 command_runner.go:130] >         "message": "",
	I1206 08:47:13.947050   48683 command_runner.go:130] >         "reason": "",
	I1206 08:47:13.947053   48683 command_runner.go:130] >         "status": true,
	I1206 08:47:13.947059   48683 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 08:47:13.947062   48683 command_runner.go:130] >       },
	I1206 08:47:13.947065   48683 command_runner.go:130] >       {
	I1206 08:47:13.947072   48683 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 08:47:13.947081   48683 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 08:47:13.947085   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947089   48683 command_runner.go:130] >         "type": "NetworkReady"
	I1206 08:47:13.947091   48683 command_runner.go:130] >       },
	I1206 08:47:13.947094   48683 command_runner.go:130] >       {
	I1206 08:47:13.947118   48683 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 08:47:13.947123   48683 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 08:47:13.947129   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947134   48683 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 08:47:13.947137   48683 command_runner.go:130] >       }
	I1206 08:47:13.947139   48683 command_runner.go:130] >     ]
	I1206 08:47:13.947142   48683 command_runner.go:130] >   }
	I1206 08:47:13.947144   48683 command_runner.go:130] > }
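
The `crictl info` dump above ends with three runtime conditions: RuntimeReady=true, NetworkReady=false ("cni plugin not initialized", expected at this point since kindnet is only applied after kubeadm runs), and containerd 2.2's cgroup-v1 deprecation warning. A sketch of reading those conditions (struct fields follow the JSON above; illustrative):

    // Summarize the status.conditions block from `sudo crictl info`,
    // as printed in the log above. NetworkReady=false before the CNI
    // (kindnet) is deployed is the expected state at this stage.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlInfo struct {
        Status struct {
            Conditions []struct {
                Type    string `json:"type"`
                Status  bool   `json:"status"`
                Reason  string `json:"reason"`
                Message string `json:"message"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "info").Output()
        if err != nil {
            panic(err)
        }
        var info crictlInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        for _, c := range info.Status.Conditions {
            fmt.Printf("%s=%v reason=%q\n", c.Type, c.Status, c.Reason)
        }
    }
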
	I1206 08:47:13.947502   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:13.947519   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:47:13.947541   48683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:47:13.947564   48683 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:47:13.947673   48683 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
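
The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), later written to /var/tmp/minikube/kubeadm.yaml.new. A sketch that scans the stream and confirms the KubeletConfiguration's cgroupDriver matches the "cgroupfs" driver detected earlier (assumes the gopkg.in/yaml.v3 module; illustrative only, not minikube's own validation):

    // Walk the multi-document kubeadm.yaml and print the kubelet's
    // cgroupDriver, which should agree with the host's detected driver.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                Kind         string `yaml:"kind"`
                CgroupDriver string `yaml:"cgroupDriver"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            if doc.Kind == "KubeletConfiguration" {
                fmt.Println("kubelet cgroupDriver:", doc.CgroupDriver)
            }
        }
    }
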
	
	I1206 08:47:13.947742   48683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:47:13.955523   48683 command_runner.go:130] > kubeadm
	I1206 08:47:13.955542   48683 command_runner.go:130] > kubectl
	I1206 08:47:13.955546   48683 command_runner.go:130] > kubelet
	I1206 08:47:13.955560   48683 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:47:13.955622   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:47:13.963242   48683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:47:13.976514   48683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:47:13.994365   48683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 08:47:14.008131   48683 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:47:14.012074   48683 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 08:47:14.012170   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:14.162349   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:14.970935   48683 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:47:14.971004   48683 certs.go:195] generating shared ca certs ...
	I1206 08:47:14.971035   48683 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:14.971212   48683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:47:14.971308   48683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:47:14.971340   48683 certs.go:257] generating profile certs ...
	I1206 08:47:14.971529   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:47:14.971755   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:47:14.971844   48683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:47:14.971869   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 08:47:14.971914   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 08:47:14.971945   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 08:47:14.971989   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 08:47:14.972021   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 08:47:14.972053   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 08:47:14.972085   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 08:47:14.972115   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 08:47:14.972198   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:47:14.972259   48683 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:47:14.972284   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:47:14.972342   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:47:14.972394   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:47:14.972452   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:47:14.972528   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:14.972579   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:14.972619   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem -> /usr/share/ca-certificates/4292.pem
	I1206 08:47:14.972659   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /usr/share/ca-certificates/42922.pem
	I1206 08:47:14.973224   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:47:14.995297   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:47:15.042161   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:47:15.062885   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:47:15.082018   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:47:15.101436   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:47:15.120061   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:47:15.140257   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:47:15.160107   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:47:15.178980   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:47:15.197893   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:47:15.216224   48683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
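
The scp lines above stage every certificate onto the node over SSH, and the last one ("scp memory -->") writes an in-memory kubeconfig the same way. A minimal Go sketch of that operation, piping local bytes through "sudo tee" on the remote side; the host string and paths are illustrative and this is not minikube's actual ssh_runner implementation:

// stage.go: copy a local file to a privileged path on a remote node,
// roughly the shape of the "scp ... --> ..." lines above (assumptions:
// ssh binary on PATH, passwordless sudo on the node).
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func stageFile(host, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	// Equivalent to: ssh <host> "sudo tee <remote> >/dev/null"
	cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s >/dev/null", remote))
	cmd.Stdin = bytes.NewReader(data) // works for on-disk certs and in-memory assets alike
	return cmd.Run()
}

func main() {
	if err := stageFile("docker@127.0.0.1", "ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
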
	I1206 08:47:15.229330   48683 ssh_runner.go:195] Run: openssl version
	I1206 08:47:15.235331   48683 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 08:47:15.235817   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.243429   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:47:15.250764   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254643   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254673   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254723   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.295906   48683 command_runner.go:130] > b5213941
	I1206 08:47:15.295990   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
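
The five commands above (test -s, ln -fs, ls -la, openssl x509 -hash, test -L) follow OpenSSL's c_rehash convention: the library locates a CA by the hash of its subject, so each .pem under /usr/share/ca-certificates gets a symlink named <hash>.0 in /etc/ssl/certs. A minimal Go sketch of the same steps, shelling out to the exact openssl invocation the log shows; paths are illustrative:

// rehash.go: link a CA cert under its subject-hash name so OpenSSL can
// find it, mirroring the "openssl x509 -hash" + "ln -fs" pair above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func rehash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
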
	I1206 08:47:15.303441   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.310784   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:47:15.318504   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322051   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322380   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322461   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.363237   48683 command_runner.go:130] > 51391683
	I1206 08:47:15.363703   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:47:15.371299   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.378918   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:47:15.386367   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390281   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390354   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390410   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.431004   48683 command_runner.go:130] > 3ec20f2e
	I1206 08:47:15.431441   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 08:47:15.439072   48683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442819   48683 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442856   48683 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 08:47:15.442863   48683 command_runner.go:130] > Device: 259,1	Inode: 1055659     Links: 1
	I1206 08:47:15.442870   48683 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:15.442877   48683 command_runner.go:130] > Access: 2025-12-06 08:43:07.824678266 +0000
	I1206 08:47:15.442882   48683 command_runner.go:130] > Modify: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442890   48683 command_runner.go:130] > Change: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442895   48683 command_runner.go:130] >  Birth: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442956   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 08:47:15.483144   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.483601   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 08:47:15.524376   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.524527   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 08:47:15.567333   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.567897   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 08:47:15.609722   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.610195   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 08:47:15.652939   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.653458   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 08:47:15.694815   48683 command_runner.go:130] > Certificate will not expire
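
Each check above is "openssl x509 -checkend 86400": succeed (and print "Certificate will not expire") if the certificate is still valid 24 hours from now. The same test in pure Go with crypto/x509, as a sketch; the certificate path is one of those from the log:

// checkend.go: report whether a PEM certificate expires within d, the
// question "-checkend 86400" answers for each cert above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func willExpireWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if !expiring {
		fmt.Println("Certificate will not expire")
	}
}
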
	I1206 08:47:15.695278   48683 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:15.695370   48683 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:47:15.695465   48683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:47:15.724990   48683 cri.go:89] found id: ""
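
An empty ID list here (found id: "") means containerd has no kube-system containers running yet. A sketch of the same query, shelling out to the crictl invocation shown above; run on the node itself, with passwordless sudo assumed:

// crilist.go: list kube-system container IDs via crictl, as in the
// "crictl ps -a --quiet --label ..." command above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
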
	I1206 08:47:15.725064   48683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:47:15.732181   48683 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 08:47:15.732210   48683 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 08:47:15.732217   48683 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 08:47:15.733102   48683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
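
Finding kubelet config and an etcd data directory on disk is what flips minikube from a fresh kubeadm init to a cluster restart here. A local sketch of that decision; minikube actually runs the single "sudo ls" shown above over SSH, and the three paths are the ones it lists:

// restartcheck.go: decide restart vs fresh init from the presence of
// existing cluster state, mirroring the check above (assumption: the
// presence of any of these paths is treated as existing state).
package main

import (
	"fmt"
	"os"
)

func hasExistingCluster() bool {
	for _, p := range []string{
		"/var/lib/kubelet/config.yaml",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err == nil {
			return true
		}
	}
	return false
}

func main() {
	if hasExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	}
}
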
	I1206 08:47:15.733116   48683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 08:47:15.733169   48683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 08:47:15.740768   48683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:47:15.741168   48683 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-090986" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.741273   48683 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "functional-090986" cluster setting kubeconfig missing "functional-090986" context setting]
	I1206 08:47:15.741558   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.741975   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.742128   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.742650   48683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 08:47:15.742669   48683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 08:47:15.742675   48683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 08:47:15.742680   48683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 08:47:15.742685   48683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 08:47:15.742976   48683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 08:47:15.743070   48683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 08:47:15.750828   48683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 08:47:15.750861   48683 kubeadm.go:602] duration metric: took 17.739612ms to restartPrimaryControlPlane
	I1206 08:47:15.750871   48683 kubeadm.go:403] duration metric: took 55.600148ms to StartCluster
	I1206 08:47:15.750890   48683 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.750966   48683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.751639   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.751842   48683 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 08:47:15.752180   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:15.752232   48683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 08:47:15.752302   48683 addons.go:70] Setting storage-provisioner=true in profile "functional-090986"
	I1206 08:47:15.752319   48683 addons.go:239] Setting addon storage-provisioner=true in "functional-090986"
	I1206 08:47:15.752322   48683 addons.go:70] Setting default-storageclass=true in profile "functional-090986"
	I1206 08:47:15.752340   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.752341   48683 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-090986"
	I1206 08:47:15.752637   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.752784   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.759188   48683 out.go:179] * Verifying Kubernetes components...
	I1206 08:47:15.762058   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:15.783651   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.783826   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.785192   48683 addons.go:239] Setting addon default-storageclass=true in "functional-090986"
	I1206 08:47:15.785238   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.785700   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.797451   48683 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 08:47:15.800625   48683 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:15.800648   48683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 08:47:15.800725   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.810048   48683 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:15.810080   48683 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 08:47:15.810147   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.824818   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:15.853374   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:15.963935   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:15.994167   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.016409   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:16.722308   48683 node_ready.go:35] waiting up to 6m0s for node "functional-090986" to be "Ready" ...
	I1206 08:47:16.722441   48683 type.go:168] "Request Body" body=""
	I1206 08:47:16.722509   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:16.722791   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.722902   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:16.722979   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722997   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723021   48683 retry.go:31] will retry after 246.599259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722932   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723088   48683 retry.go:31] will retry after 155.728524ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
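
Both addon applies fail while the apiserver is still coming up, and retry.go reschedules them with growing, jittered delays (155ms and 246ms above, climbing to several seconds below). A generic Go sketch of that loop; the doubling factor and jitter range are illustrative, not minikube's exact retry.go parameters:

// retry.go (sketch): run fn, and on failure sleep a jittered, growing
// delay before trying again, matching the "will retry after ..." lines.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2 // grow the base delay each attempt
	}
	return err
}

func main() {
	_ = retry(5, 200*time.Millisecond, func() error {
		return fmt.Errorf("connection refused")
	})
}
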
	I1206 08:47:16.879530   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.938491   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.942697   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.942739   48683 retry.go:31] will retry after 198.095926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.969843   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.032387   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.037081   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.037167   48683 retry.go:31] will retry after 340.655262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.141488   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:17.200483   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.200581   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.200607   48683 retry.go:31] will retry after 823.921965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.222635   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.222706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.222990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:17.378343   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.437909   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.437949   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.437997   48683 retry.go:31] will retry after 597.373907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.723431   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.723506   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.723862   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.025532   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:18.036222   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:18.102548   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.106195   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.106289   48683 retry.go:31] will retry after 988.595122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128444   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.128537   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128579   48683 retry.go:31] will retry after 1.22957213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.222734   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.222810   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.223190   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.722737   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.722827   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.723191   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:18.723277   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
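
The GET requests issued every ~500ms are node_ready.go polling the node's Ready condition until the apiserver starts answering. A client-go sketch of the same wait; it requires k8s.io/client-go in go.mod, and the kubeconfig path is illustrative while the node name and 6m timeout match the log:

// nodeready.go (sketch): poll GET /api/v1/nodes/<name> until the Ready
// condition is True or the timeout expires, as the loop above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	if err := waitNodeReady("kubeconfig", "functional-090986", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
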
	I1206 08:47:19.095767   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:19.151460   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.155168   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.155201   48683 retry.go:31] will retry after 1.717558752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.223503   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.223595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.223937   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:19.358372   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:19.411770   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.415269   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.415303   48683 retry.go:31] will retry after 781.287082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.722648   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.722942   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.197734   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:20.223123   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.223196   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.223547   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.262283   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.262363   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.262407   48683 retry.go:31] will retry after 1.829414459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.722870   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.722941   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.723284   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:20.723338   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:20.873661   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:20.932799   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.936985   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.937020   48683 retry.go:31] will retry after 2.554499586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:21.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.223553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.223934   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:21.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.722674   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.723048   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.092657   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:22.149785   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:22.153326   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.153368   48683 retry.go:31] will retry after 2.084938041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.222743   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.223181   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.722901   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.722987   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.723330   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:22.723402   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:23.223196   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.223285   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.223660   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:23.492173   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:23.557652   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:23.557715   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.557741   48683 retry.go:31] will retry after 4.19827742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.723091   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.723166   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.723482   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.223263   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.223339   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.223623   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.238906   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:24.307275   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:24.307320   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.307339   48683 retry.go:31] will retry after 4.494270685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.722793   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.722877   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.723244   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:25.222930   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.223006   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.223365   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:25.223455   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:25.723213   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.723279   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.723596   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.223491   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.223588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.223913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.722621   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.722699   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.723036   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.222525   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.222628   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.222892   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.722651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.722982   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:27.723035   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:27.756528   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:27.814954   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:27.818792   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:27.818824   48683 retry.go:31] will retry after 5.399057422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
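	(The addon apply above is an ordinary `kubectl apply --force -f <manifest>` that fails while nothing is listening on port 8441, so addons.go queues a retry. A rough sketch of that apply-and-retry shape — a hypothetical helper, not minikube's addons.go; the delays are invented for illustration, loosely matching the waits retry.go logs here:)

```go
// Sketch: run "kubectl apply" for an addon manifest and retry on failure,
// approximating the behavior addons.go logs above. Hypothetical code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyAddon(manifest string) error {
	cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
	// Invented delays; the real retry.go picks randomized growing waits
	// (5.4s, 9.2s, 12.3s, ... in this log).
	for _, d := range []time.Duration{5 * time.Second, 9 * time.Second, 12 * time.Second} {
		err := applyAddon(manifest)
		if err == nil {
			fmt.Println("addon applied")
			return
		}
		fmt.Println(err, "- retrying after", d)
		time.Sleep(d)
	}
	fmt.Println("giving up")
}
```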
	I1206 08:47:28.223412   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.223490   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.223811   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.723414   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.723485   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.723794   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.802108   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:28.864913   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:28.864953   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:28.864972   48683 retry.go:31] will retry after 3.285056528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:29.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.223556   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.223857   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:29.722601   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.723030   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:29.723087   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:30.222650   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.222720   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.223035   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:30.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.222982   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.223061   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.223424   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.723202   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.723273   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.723614   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:31.723661   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:32.150291   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:32.207920   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:32.211781   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.211813   48683 retry.go:31] will retry after 10.805243336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.223065   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.223158   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.223541   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:32.723329   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.723438   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.723744   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.218182   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:33.222610   48683 type.go:168] "Request Body" body=""
	I1206 08:47:33.222677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:33.222931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.295753   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:33.295946   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:33.295967   48683 retry.go:31] will retry after 9.227502372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:33.723484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:33.723575   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:33.723917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:33.723973   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:34.222605   48683 type.go:168] "Request Body" body=""
	I1206 08:47:34.222681   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:34.223037   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:34.723424   48683 type.go:168] "Request Body" body=""
	I1206 08:47:34.723499   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:34.723811   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:35.222543   48683 type.go:168] "Request Body" body=""
	I1206 08:47:35.222621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:35.222963   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:35.722601   48683 type.go:168] "Request Body" body=""
	I1206 08:47:35.722678   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:35.723029   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:36.223123   48683 type.go:168] "Request Body" body=""
	I1206 08:47:36.223195   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:36.223476   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:36.223516   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:36.723305   48683 type.go:168] "Request Body" body=""
	I1206 08:47:36.723388   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:36.723674   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:37.223484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:37.223557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:37.223866   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:37.723315   48683 type.go:168] "Request Body" body=""
	I1206 08:47:37.723395   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:37.723693   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:38.223481   48683 type.go:168] "Request Body" body=""
	I1206 08:47:38.223553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:38.223887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:38.223937   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:38.722588   48683 type.go:168] "Request Body" body=""
	I1206 08:47:38.722659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:38.723024   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:39.223350   48683 type.go:168] "Request Body" body=""
	I1206 08:47:39.223435   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:39.223711   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:39.723507   48683 type.go:168] "Request Body" body=""
	I1206 08:47:39.723587   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:39.723926   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:40.222518   48683 type.go:168] "Request Body" body=""
	I1206 08:47:40.222602   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:40.223000   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:40.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:47:40.723573   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:40.723901   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:40.723952   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:41.222532   48683 type.go:168] "Request Body" body=""
	I1206 08:47:41.222606   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:41.222910   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:41.722688   48683 type.go:168] "Request Body" body=""
	I1206 08:47:41.722766   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:41.723083   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:42.222810   48683 type.go:168] "Request Body" body=""
	I1206 08:47:42.222891   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:42.223201   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:42.523700   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:42.586651   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:42.586695   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:42.586713   48683 retry.go:31] will retry after 12.2898811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:42.723024   48683 type.go:168] "Request Body" body=""
	I1206 08:47:42.723100   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:42.723445   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:43.017838   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:43.079371   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:43.079435   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:43.079458   48683 retry.go:31] will retry after 19.494910144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
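	(The waits retry.go picks across these attempts — 5.4s, 3.3s, 10.8s, 9.2s, 12.3s, 19.5s — look like a jittered, growing backoff, which is why they are not monotonic. A compact way to express that general pattern with the standard k8s.io/apimachinery helper; this is a sketch of the technique, not minikube's retry.go, and the parameters are assumptions:)

```go
// Sketch: jittered exponential backoff via apimachinery's wait package,
// the same general pattern as the retry.go waits in this log.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 5 * time.Second, // first wait, similar in scale to the log's 5.4s
		Factor:   1.5,             // grow each step
		Jitter:   0.5,             // randomize each wait, hence the non-monotonic delays
		Steps:    6,               // give up after six attempts
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Println("attempt", attempt)
		// Return true once the operation (e.g. the kubectl apply above) succeeds.
		return false, nil
	})
	if err != nil {
		fmt.Println("exhausted retries:", err) // wait.ErrWaitTimeout once Steps run out
	}
}
```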
	I1206 08:47:43.222603   48683 type.go:168] "Request Body" body=""
	I1206 08:47:43.222692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:43.223135   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:43.223199   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:43.722619   48683 type.go:168] "Request Body" body=""
	I1206 08:47:43.722697   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:43.722959   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:44.222540   48683 type.go:168] "Request Body" body=""
	I1206 08:47:44.222614   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:44.222964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:44.722637   48683 type.go:168] "Request Body" body=""
	I1206 08:47:44.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:44.723067   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:45.222713   48683 type.go:168] "Request Body" body=""
	I1206 08:47:45.222784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:45.223156   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:45.223228   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:45.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:47:45.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:45.722969   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:46.223003   48683 type.go:168] "Request Body" body=""
	I1206 08:47:46.223089   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:46.223469   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:46.723273   48683 type.go:168] "Request Body" body=""
	I1206 08:47:46.723345   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:46.723681   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:47.223099   48683 type.go:168] "Request Body" body=""
	I1206 08:47:47.223167   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:47.223496   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:47.223542   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:47.723308   48683 type.go:168] "Request Body" body=""
	I1206 08:47:47.723392   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:47.723713   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:48.223454   48683 type.go:168] "Request Body" body=""
	I1206 08:47:48.223519   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:48.223802   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:48.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:47:48.722647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:48.722998   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:49.222713   48683 type.go:168] "Request Body" body=""
	I1206 08:47:49.222788   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:49.223109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:49.722484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:49.722561   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:49.722823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:49.722870   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:50.222580   48683 type.go:168] "Request Body" body=""
	I1206 08:47:50.222659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:50.222990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:50.722702   48683 type.go:168] "Request Body" body=""
	I1206 08:47:50.722785   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:50.723086   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:51.222858   48683 type.go:168] "Request Body" body=""
	I1206 08:47:51.222936   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:51.223324   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:51.723237   48683 type.go:168] "Request Body" body=""
	I1206 08:47:51.723311   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:51.723634   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:51.723682   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:52.223453   48683 type.go:168] "Request Body" body=""
	I1206 08:47:52.223522   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:52.223869   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:52.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:47:52.722642   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:52.722897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:53.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:47:53.222638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:53.222985   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:53.722688   48683 type.go:168] "Request Body" body=""
	I1206 08:47:53.722770   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:53.723108   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:54.222503   48683 type.go:168] "Request Body" body=""
	I1206 08:47:54.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:54.222905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:54.222955   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:54.722588   48683 type.go:168] "Request Body" body=""
	I1206 08:47:54.722660   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:54.723065   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:54.877464   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:54.933804   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:54.937955   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:54.937987   48683 retry.go:31] will retry after 17.91075527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:55.223442   48683 type.go:168] "Request Body" body=""
	I1206 08:47:55.223519   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:55.223852   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:55.722542   48683 type.go:168] "Request Body" body=""
	I1206 08:47:55.722606   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:55.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:56.222999   48683 type.go:168] "Request Body" body=""
	I1206 08:47:56.223070   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:56.223429   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:56.223487   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:56.723218   48683 type.go:168] "Request Body" body=""
	I1206 08:47:56.723287   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:56.723646   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:57.223125   48683 type.go:168] "Request Body" body=""
	I1206 08:47:57.223203   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:57.223494   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:57.722995   48683 type.go:168] "Request Body" body=""
	I1206 08:47:57.723069   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:57.723443   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:58.223117   48683 type.go:168] "Request Body" body=""
	I1206 08:47:58.223189   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:58.223566   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:58.223620   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:58.723372   48683 type.go:168] "Request Body" body=""
	I1206 08:47:58.723454   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:58.723711   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:59.223465   48683 type.go:168] "Request Body" body=""
	I1206 08:47:59.223543   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:59.223912   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:59.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:47:59.722619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:59.722939   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:00.247414   48683 type.go:168] "Request Body" body=""
	I1206 08:48:00.247503   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:00.247882   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:00.247935   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
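	(Every attempt in this stretch fails identically, on both 192.168.49.2:8441 and [::1]:8441 — connection refused, meaning nothing is accepting TCP connections on the apiserver port at all, rather than a TLS or auth problem. A quick standalone diagnostic sketch for that situation is a plain dial probe; the address is taken from this log:)

```go
// Sketch: probe the apiserver port directly to distinguish "apiserver down"
// (connection refused, as in this log) from problems higher in the stack.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.49.2:8441" // the endpoint the poll loop above is hitting
	for i := 0; i < 10; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println("not reachable:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		conn.Close()
		fmt.Println("port is open; the apiserver is at least listening")
		return
	}
}
```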
	I1206 08:48:00.722555   48683 type.go:168] "Request Body" body=""
	I1206 08:48:00.722626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:00.722938   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:01.222887   48683 type.go:168] "Request Body" body=""
	I1206 08:48:01.222999   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:01.223358   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:01.723162   48683 type.go:168] "Request Body" body=""
	I1206 08:48:01.723235   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:01.723597   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:02.223412   48683 type.go:168] "Request Body" body=""
	I1206 08:48:02.223493   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:02.223823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:02.575367   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:02.637904   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:02.637958   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:02.637977   48683 retry.go:31] will retry after 12.943468008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:02.723120   48683 type.go:168] "Request Body" body=""
	I1206 08:48:02.723231   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:02.723512   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:02.723552   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:03.223325   48683 type.go:168] "Request Body" body=""
	I1206 08:48:03.223416   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:03.223738   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:03.723412   48683 type.go:168] "Request Body" body=""
	I1206 08:48:03.723492   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:03.723836   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:04.222479   48683 type.go:168] "Request Body" body=""
	I1206 08:48:04.222557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:04.222823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:04.722559   48683 type.go:168] "Request Body" body=""
	I1206 08:48:04.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:04.722983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:05.222708   48683 type.go:168] "Request Body" body=""
	I1206 08:48:05.222783   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:05.223149   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:05.223222   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:05.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:48:05.722620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:05.722946   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:06.223159   48683 type.go:168] "Request Body" body=""
	I1206 08:48:06.223264   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:06.223665   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:06.723461   48683 type.go:168] "Request Body" body=""
	I1206 08:48:06.723536   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:06.723855   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-090986 repeated every ~500ms from 08:48:07 through 08:48:12, each answered with "connect: connection refused"; node_ready.go:55 logged identical "will retry" warnings at 08:48:07, 08:48:10 and 08:48:12 ...]
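The loop above is minikube waiting for the node's Ready condition: it issues the same GET every 500ms and treats "connection refused" as retryable rather than fatal. A minimal sketch of that pattern with client-go is below; this is an illustration, not minikube's actual node_ready.go, and the helper name and timeout parameter are assumptions:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls the node every 500ms until its Ready condition is
// True or the timeout expires. Transient errors such as "connection
// refused" are logged and retried, matching the behaviour in the log.
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Keep polling; the apiserver may still be starting.
				fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}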
	I1206 08:48:12.849275   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:12.904952   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:12.908634   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:12.908667   48683 retry.go:31] will retry after 25.236445918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
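When the storage-provisioner apply fails, minikube does not fail the addon immediately; it schedules another attempt after a long, jittered delay ("will retry after 25.236445918s"). A minimal sketch of that retry-with-backoff shape, with illustrative bounds (this is an assumption, not minikube's retry.go):

package retry

import (
	"fmt"
	"math/rand"
	"time"
)

// Expo re-runs fn with exponential backoff plus jitter until it succeeds
// or the total budget is spent. initial must be > 0.
func Expo(fn func() error, initial, maxWait, total time.Duration) error {
	deadline := time.Now().Add(total)
	wait := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("retry budget exhausted: %w", err)
		}
		// Jitter the delay so concurrent callers don't retry in lockstep.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if wait *= 2; wait > maxWait {
			wait = maxWait
		}
	}
}

Under those assumptions the addon callback would be invoked as, say, Expo(applyProvisioner, time.Second, 30*time.Second, 2*time.Minute), producing exactly the "apply failed, will retry" cadence seen here.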
	[... node Ready polling continued every ~500ms from 08:48:13 through 08:48:15, each GET refused, with another node_ready.go:55 "will retry" warning at 08:48:14 ...]
	I1206 08:48:15.582577   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:15.646326   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:15.649856   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:15.649887   48683 retry.go:31] will retry after 20.09954841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued every ~500ms from 08:48:15 through 08:48:35, every GET refused, with node_ready.go:55 "will retry" warnings logged roughly every 2s (08:48:16, :19, :21, :23, :25, :28, :30, :32, :34) ...]
	I1206 08:48:35.750369   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:35.818338   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818385   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818494   48683 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
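Both addon applies fail the same way for the same reason: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so while the apiserver is refusing connections every apply dies at validation before anything is submitted (the error message itself offers --validate=false as an escape hatch). One way to avoid burning the retry budget in this state is to gate the apply on an apiserver readiness probe; a minimal sketch, with an assumed base URL and deliberately loose TLS purely for illustration:

package probe

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverReady returns nil once GET <base>/readyz answers 200 OK.
func apiserverReady(base string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The cluster CA isn't wired up in this sketch, so verification
		// is skipped; real code should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return err // e.g. "connection refused" while the apiserver restarts
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("apiserver not ready: %s", resp.Status)
	}
	return nil
}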
	[... polling continued at 08:48:36–08:48:37, each GET refused, with another node_ready.go:55 "will retry" warning at 08:48:37 ...]
	I1206 08:48:38.145414   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:38.206093   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210075   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210171   48683 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 08:48:38.213345   48683 out.go:179] * Enabled addons: 
	I1206 08:48:38.217127   48683 addons.go:530] duration metric: took 1m22.464883403s for enable addons: enabled=[]
	I1206 08:48:38.223238   48683 type.go:168] "Request Body" body=""
	I1206 08:48:38.223319   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:38.223680   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:38.723466   48683 type.go:168] "Request Body" body=""
	I1206 08:48:38.723534   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:38.723871   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:39.222501   48683 type.go:168] "Request Body" body=""
	I1206 08:48:39.222572   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:39.222930   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:39.722607   48683 type.go:168] "Request Body" body=""
	I1206 08:48:39.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:39.723013   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:39.723066   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:40.222676   48683 type.go:168] "Request Body" body=""
	I1206 08:48:40.222756   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:40.223027   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:40.722552   48683 type.go:168] "Request Body" body=""
	I1206 08:48:40.722649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:40.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:41.223120   48683 type.go:168] "Request Body" body=""
	I1206 08:48:41.223193   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:41.223622   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:41.723403   48683 type.go:168] "Request Body" body=""
	I1206 08:48:41.723475   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:41.723817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:41.723873   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:42.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:48:42.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:42.222978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:42.722684   48683 type.go:168] "Request Body" body=""
	I1206 08:48:42.722790   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:42.723129   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:43.222817   48683 type.go:168] "Request Body" body=""
	I1206 08:48:43.222915   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:43.223184   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:43.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:48:43.722658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:43.723004   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:44.222598   48683 type.go:168] "Request Body" body=""
	I1206 08:48:44.222684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:44.223013   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:44.223067   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:44.722714   48683 type.go:168] "Request Body" body=""
	I1206 08:48:44.722785   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:44.723069   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:45.222844   48683 type.go:168] "Request Body" body=""
	I1206 08:48:45.222932   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:45.223348   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:45.723174   48683 type.go:168] "Request Body" body=""
	I1206 08:48:45.723260   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:45.723605   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:46.222507   48683 type.go:168] "Request Body" body=""
	I1206 08:48:46.222584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:46.222918   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:46.722555   48683 type.go:168] "Request Body" body=""
	I1206 08:48:46.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:46.722952   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:46.723007   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:47.222685   48683 type.go:168] "Request Body" body=""
	I1206 08:48:47.222760   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:47.223112   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:47.722496   48683 type.go:168] "Request Body" body=""
	I1206 08:48:47.722563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:47.722826   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:48.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:48:48.222616   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:48.222974   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:48.722711   48683 type.go:168] "Request Body" body=""
	I1206 08:48:48.722784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:48.723121   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:48.723172   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:49.222551   48683 type.go:168] "Request Body" body=""
	I1206 08:48:49.222621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:49.222915   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:49.722650   48683 type.go:168] "Request Body" body=""
	I1206 08:48:49.722727   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:49.723082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:50.222645   48683 type.go:168] "Request Body" body=""
	I1206 08:48:50.222761   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:50.223073   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:50.722501   48683 type.go:168] "Request Body" body=""
	I1206 08:48:50.722569   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:50.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:51.222952   48683 type.go:168] "Request Body" body=""
	I1206 08:48:51.223025   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:51.223425   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:51.223480   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:51.723105   48683 type.go:168] "Request Body" body=""
	I1206 08:48:51.723185   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:51.723538   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:52.223323   48683 type.go:168] "Request Body" body=""
	I1206 08:48:52.223409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:52.223689   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:52.722451   48683 type.go:168] "Request Body" body=""
	I1206 08:48:52.722525   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:52.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:53.222606   48683 type.go:168] "Request Body" body=""
	I1206 08:48:53.222684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:53.223017   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:53.722735   48683 type.go:168] "Request Body" body=""
	I1206 08:48:53.722801   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:53.723122   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:53.723177   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:54.222846   48683 type.go:168] "Request Body" body=""
	I1206 08:48:54.222924   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:54.223260   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:54.722973   48683 type.go:168] "Request Body" body=""
	I1206 08:48:54.723056   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:54.723447   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:55.223281   48683 type.go:168] "Request Body" body=""
	I1206 08:48:55.223354   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:55.223701   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:55.723485   48683 type.go:168] "Request Body" body=""
	I1206 08:48:55.723577   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:55.723911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:55.723962   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical poll cycle repeats every ~500 ms from 08:48:56 through 08:49:56: each GET to https://192.168.49.2:8441/api/v1/nodes/functional-090986 draws an empty response (dial tcp 192.168.49.2:8441: connect: connection refused), and node_ready.go:55 re-logs the "will retry" warning roughly every 2 s ...]
	I1206 08:49:56.723346   48683 type.go:168] "Request Body" body=""
	I1206 08:49:56.723440   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:56.723715   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:57.223483   48683 type.go:168] "Request Body" body=""
	I1206 08:49:57.223563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:57.224002   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:57.722697   48683 type.go:168] "Request Body" body=""
	I1206 08:49:57.722767   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:57.723097   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:58.222803   48683 type.go:168] "Request Body" body=""
	I1206 08:49:58.222876   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:58.223156   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:58.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:49:58.722626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:58.722960   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:58.723019   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:59.222551   48683 type.go:168] "Request Body" body=""
	I1206 08:49:59.222626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:59.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:59.723484   48683 type.go:168] "Request Body" body=""
	I1206 08:49:59.723553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:59.723878   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:00.222685   48683 type.go:168] "Request Body" body=""
	I1206 08:50:00.222804   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:00.223133   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:00.722618   48683 type.go:168] "Request Body" body=""
	I1206 08:50:00.722691   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:00.723059   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:00.723115   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:01.222895   48683 type.go:168] "Request Body" body=""
	I1206 08:50:01.222993   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:01.223286   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:01.722600   48683 type.go:168] "Request Body" body=""
	I1206 08:50:01.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:01.723014   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:02.222576   48683 type.go:168] "Request Body" body=""
	I1206 08:50:02.222651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:02.223022   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:02.722457   48683 type.go:168] "Request Body" body=""
	I1206 08:50:02.722533   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:02.722815   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:03.222502   48683 type.go:168] "Request Body" body=""
	I1206 08:50:03.222573   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:03.222946   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:03.222994   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:03.722541   48683 type.go:168] "Request Body" body=""
	I1206 08:50:03.722640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:03.722983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:04.222608   48683 type.go:168] "Request Body" body=""
	I1206 08:50:04.222676   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:04.223006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:04.722602   48683 type.go:168] "Request Body" body=""
	I1206 08:50:04.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:04.723041   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:05.222818   48683 type.go:168] "Request Body" body=""
	I1206 08:50:05.222895   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:05.223192   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:05.223237   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:05.722878   48683 type.go:168] "Request Body" body=""
	I1206 08:50:05.722947   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:05.723266   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:06.223357   48683 type.go:168] "Request Body" body=""
	I1206 08:50:06.223444   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:06.223770   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:06.722470   48683 type.go:168] "Request Body" body=""
	I1206 08:50:06.722567   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:06.722904   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:07.222610   48683 type.go:168] "Request Body" body=""
	I1206 08:50:07.222692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:07.222961   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:07.722589   48683 type.go:168] "Request Body" body=""
	I1206 08:50:07.722668   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:07.723032   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:07.723088   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:08.222659   48683 type.go:168] "Request Body" body=""
	I1206 08:50:08.222739   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:08.223085   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:08.722770   48683 type.go:168] "Request Body" body=""
	I1206 08:50:08.722843   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:08.723145   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:09.222527   48683 type.go:168] "Request Body" body=""
	I1206 08:50:09.222599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:09.222860   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:09.722567   48683 type.go:168] "Request Body" body=""
	I1206 08:50:09.722657   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:09.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:10.222655   48683 type.go:168] "Request Body" body=""
	I1206 08:50:10.222734   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:10.223056   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:10.223102   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:10.722609   48683 type.go:168] "Request Body" body=""
	I1206 08:50:10.722688   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:10.723026   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:11.222874   48683 type.go:168] "Request Body" body=""
	I1206 08:50:11.222955   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:11.223305   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:11.723062   48683 type.go:168] "Request Body" body=""
	I1206 08:50:11.723127   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:11.723408   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:12.223181   48683 type.go:168] "Request Body" body=""
	I1206 08:50:12.223261   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:12.223620   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:12.223677   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:12.723275   48683 type.go:168] "Request Body" body=""
	I1206 08:50:12.723355   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:12.723713   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:13.223472   48683 type.go:168] "Request Body" body=""
	I1206 08:50:13.223538   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:13.223808   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:13.722511   48683 type.go:168] "Request Body" body=""
	I1206 08:50:13.722583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:13.722888   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:14.222590   48683 type.go:168] "Request Body" body=""
	I1206 08:50:14.222669   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:14.222999   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:14.722507   48683 type.go:168] "Request Body" body=""
	I1206 08:50:14.722580   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:14.722918   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:14.722969   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:15.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:50:15.222649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:15.222970   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:15.722583   48683 type.go:168] "Request Body" body=""
	I1206 08:50:15.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:15.722978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:16.223181   48683 type.go:168] "Request Body" body=""
	I1206 08:50:16.223255   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:16.223535   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:16.723326   48683 type.go:168] "Request Body" body=""
	I1206 08:50:16.723416   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:16.723757   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:16.723819   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:17.222495   48683 type.go:168] "Request Body" body=""
	I1206 08:50:17.222577   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:17.222914   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:17.722474   48683 type.go:168] "Request Body" body=""
	I1206 08:50:17.722547   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:17.722850   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:18.222583   48683 type.go:168] "Request Body" body=""
	I1206 08:50:18.222661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:18.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:18.722692   48683 type.go:168] "Request Body" body=""
	I1206 08:50:18.722776   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:18.723111   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:19.222505   48683 type.go:168] "Request Body" body=""
	I1206 08:50:19.222594   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:19.222859   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:19.222907   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:19.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:50:19.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:19.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:20.222684   48683 type.go:168] "Request Body" body=""
	I1206 08:50:20.222788   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:20.223168   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:20.722431   48683 type.go:168] "Request Body" body=""
	I1206 08:50:20.722497   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:20.722767   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:21.222641   48683 type.go:168] "Request Body" body=""
	I1206 08:50:21.222714   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:21.223070   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:21.223132   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:21.722822   48683 type.go:168] "Request Body" body=""
	I1206 08:50:21.722896   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:21.723237   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:22.222916   48683 type.go:168] "Request Body" body=""
	I1206 08:50:22.222997   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:22.223321   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:22.723124   48683 type.go:168] "Request Body" body=""
	I1206 08:50:22.723201   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:22.723551   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:23.223344   48683 type.go:168] "Request Body" body=""
	I1206 08:50:23.223446   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:23.223810   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:23.223863   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:23.722552   48683 type.go:168] "Request Body" body=""
	I1206 08:50:23.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:23.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:24.222565   48683 type.go:168] "Request Body" body=""
	I1206 08:50:24.222636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:24.222967   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:24.722582   48683 type.go:168] "Request Body" body=""
	I1206 08:50:24.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:24.723045   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:25.222591   48683 type.go:168] "Request Body" body=""
	I1206 08:50:25.222675   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:25.222956   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:25.722490   48683 type.go:168] "Request Body" body=""
	I1206 08:50:25.722558   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:25.722858   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:25.722902   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:26.222992   48683 type.go:168] "Request Body" body=""
	I1206 08:50:26.223066   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:26.223429   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:26.723227   48683 type.go:168] "Request Body" body=""
	I1206 08:50:26.723293   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:26.723619   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:27.223425   48683 type.go:168] "Request Body" body=""
	I1206 08:50:27.223499   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:27.223833   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:27.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:50:27.722621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:27.722968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:27.723024   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:28.222458   48683 type.go:168] "Request Body" body=""
	I1206 08:50:28.222528   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:28.222853   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:28.722553   48683 type.go:168] "Request Body" body=""
	I1206 08:50:28.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:28.722950   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:29.222553   48683 type.go:168] "Request Body" body=""
	I1206 08:50:29.222651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:29.222978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:29.722677   48683 type.go:168] "Request Body" body=""
	I1206 08:50:29.722755   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:29.723172   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:29.723243   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:30.222914   48683 type.go:168] "Request Body" body=""
	I1206 08:50:30.222992   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:30.223302   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:30.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:50:30.722632   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:30.722926   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:31.222879   48683 type.go:168] "Request Body" body=""
	I1206 08:50:31.222948   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:31.223214   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:31.722593   48683 type.go:168] "Request Body" body=""
	I1206 08:50:31.722667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:31.723003   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:32.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:50:32.222636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:32.222931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:32.222979   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:32.722487   48683 type.go:168] "Request Body" body=""
	I1206 08:50:32.722557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:32.722887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:33.222576   48683 type.go:168] "Request Body" body=""
	I1206 08:50:33.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:33.222988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:33.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:50:33.722658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:33.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:34.222527   48683 type.go:168] "Request Body" body=""
	I1206 08:50:34.222618   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:34.222896   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:34.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:50:34.722637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:34.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:34.723033   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:35.222710   48683 type.go:168] "Request Body" body=""
	I1206 08:50:35.222784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:35.223174   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:35.722627   48683 type.go:168] "Request Body" body=""
	I1206 08:50:35.722703   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:35.723010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:36.223126   48683 type.go:168] "Request Body" body=""
	I1206 08:50:36.223207   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:36.223553   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:36.723209   48683 type.go:168] "Request Body" body=""
	I1206 08:50:36.723279   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:36.723639   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:36.723696   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:37.223303   48683 type.go:168] "Request Body" body=""
	I1206 08:50:37.223402   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:37.223672   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:37.723463   48683 type.go:168] "Request Body" body=""
	I1206 08:50:37.723537   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:37.723869   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:38.222461   48683 type.go:168] "Request Body" body=""
	I1206 08:50:38.222541   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:38.222903   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:38.723171   48683 type.go:168] "Request Body" body=""
	I1206 08:50:38.723241   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:38.723601   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:39.223401   48683 type.go:168] "Request Body" body=""
	I1206 08:50:39.223483   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:39.223848   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:39.223901   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:39.722574   48683 type.go:168] "Request Body" body=""
	I1206 08:50:39.722647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:39.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:40.222657   48683 type.go:168] "Request Body" body=""
	I1206 08:50:40.222728   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:40.222993   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:40.722677   48683 type.go:168] "Request Body" body=""
	I1206 08:50:40.722746   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:40.723061   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:41.222889   48683 type.go:168] "Request Body" body=""
	I1206 08:50:41.222968   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:41.223319   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:41.722870   48683 type.go:168] "Request Body" body=""
	I1206 08:50:41.722996   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:41.723258   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:41.723307   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:42.223105   48683 type.go:168] "Request Body" body=""
	I1206 08:50:42.223193   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:42.223674   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:42.723351   48683 type.go:168] "Request Body" body=""
	I1206 08:50:42.723454   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:42.723771   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:43.222466   48683 type.go:168] "Request Body" body=""
	I1206 08:50:43.222542   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:43.222830   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:43.723509   48683 type.go:168] "Request Body" body=""
	I1206 08:50:43.723588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:43.723950   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:43.724004   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:44.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:50:44.222639   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:44.222958   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:44.722508   48683 type.go:168] "Request Body" body=""
	I1206 08:50:44.722579   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:44.722910   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:45.222798   48683 type.go:168] "Request Body" body=""
	I1206 08:50:45.223002   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:45.223897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:45.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:50:45.722648   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:45.722995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:46.224920   48683 type.go:168] "Request Body" body=""
	I1206 08:50:46.224987   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:46.225286   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:46.225327   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:46.723066   48683 type.go:168] "Request Body" body=""
	I1206 08:50:46.723140   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:46.723458   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 request/response cycle repeats every ~500 ms from 08:50:47 through 08:51:48, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready "will retry" warning recurs roughly every 2.5 s across the same span ...]
	I1206 08:51:48.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:48.222647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:48.222994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:48.223050   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:48.722729   48683 type.go:168] "Request Body" body=""
	I1206 08:51:48.722814   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:48.723224   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:49.222495   48683 type.go:168] "Request Body" body=""
	I1206 08:51:49.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:49.222841   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:49.722543   48683 type.go:168] "Request Body" body=""
	I1206 08:51:49.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:49.722989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:50.222560   48683 type.go:168] "Request Body" body=""
	I1206 08:51:50.222640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:50.222975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:50.722647   48683 type.go:168] "Request Body" body=""
	I1206 08:51:50.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:50.723039   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:50.723088   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:51.222890   48683 type.go:168] "Request Body" body=""
	I1206 08:51:51.222961   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:51.223302   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:51.723095   48683 type.go:168] "Request Body" body=""
	I1206 08:51:51.723166   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:51.723527   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:52.223293   48683 type.go:168] "Request Body" body=""
	I1206 08:51:52.223365   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:52.223638   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:52.723480   48683 type.go:168] "Request Body" body=""
	I1206 08:51:52.723556   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:52.723872   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:52.723957   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:53.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:53.222650   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:53.222971   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:53.722667   48683 type.go:168] "Request Body" body=""
	I1206 08:51:53.722737   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:53.723003   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:54.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:51:54.222637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:54.222983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:54.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:51:54.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:54.722987   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:55.223524   48683 type.go:168] "Request Body" body=""
	I1206 08:51:55.223593   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:55.223922   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:55.223979   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:55.722631   48683 type.go:168] "Request Body" body=""
	I1206 08:51:55.722706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:55.723040   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:56.223219   48683 type.go:168] "Request Body" body=""
	I1206 08:51:56.223289   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:56.223644   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:56.723321   48683 type.go:168] "Request Body" body=""
	I1206 08:51:56.723409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:56.723712   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:57.223501   48683 type.go:168] "Request Body" body=""
	I1206 08:51:57.223578   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:57.223899   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:57.722570   48683 type.go:168] "Request Body" body=""
	I1206 08:51:57.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:57.722944   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:57.722991   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:58.222513   48683 type.go:168] "Request Body" body=""
	I1206 08:51:58.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:58.222843   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:58.722524   48683 type.go:168] "Request Body" body=""
	I1206 08:51:58.722599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:58.722929   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:59.222533   48683 type.go:168] "Request Body" body=""
	I1206 08:51:59.222619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:59.222968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:59.722619   48683 type.go:168] "Request Body" body=""
	I1206 08:51:59.722692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:59.723017   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:59.723091   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:00.222670   48683 type.go:168] "Request Body" body=""
	I1206 08:52:00.222765   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:00.223085   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:00.722589   48683 type.go:168] "Request Body" body=""
	I1206 08:52:00.722664   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:00.722961   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:01.222893   48683 type.go:168] "Request Body" body=""
	I1206 08:52:01.222975   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:01.223252   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:01.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:52:01.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:01.722982   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:02.222554   48683 type.go:168] "Request Body" body=""
	I1206 08:52:02.222634   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:02.222965   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:02.223025   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:02.722664   48683 type.go:168] "Request Body" body=""
	I1206 08:52:02.722731   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:02.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:03.222669   48683 type.go:168] "Request Body" body=""
	I1206 08:52:03.222742   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:03.223082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:03.722639   48683 type.go:168] "Request Body" body=""
	I1206 08:52:03.722717   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:03.723036   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:04.222516   48683 type.go:168] "Request Body" body=""
	I1206 08:52:04.222582   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:04.222867   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:04.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:52:04.722657   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:04.722999   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:04.723061   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:05.222582   48683 type.go:168] "Request Body" body=""
	I1206 08:52:05.222660   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:05.223001   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:05.722457   48683 type.go:168] "Request Body" body=""
	I1206 08:52:05.722529   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:05.722796   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:06.223039   48683 type.go:168] "Request Body" body=""
	I1206 08:52:06.223118   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:06.223488   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:06.723240   48683 type.go:168] "Request Body" body=""
	I1206 08:52:06.723313   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:06.723661   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:06.723717   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:07.223477   48683 type.go:168] "Request Body" body=""
	I1206 08:52:07.223559   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:07.223842   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:07.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:52:07.722632   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:07.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:08.222574   48683 type.go:168] "Request Body" body=""
	I1206 08:52:08.222667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:08.223018   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:08.722509   48683 type.go:168] "Request Body" body=""
	I1206 08:52:08.722579   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:08.722903   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:09.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:52:09.222633   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:09.222980   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:09.223037   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:09.722702   48683 type.go:168] "Request Body" body=""
	I1206 08:52:09.722790   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:09.723150   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:10.222456   48683 type.go:168] "Request Body" body=""
	I1206 08:52:10.222522   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:10.222851   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:10.722535   48683 type.go:168] "Request Body" body=""
	I1206 08:52:10.722612   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:10.722985   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:11.222767   48683 type.go:168] "Request Body" body=""
	I1206 08:52:11.222843   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:11.223181   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:11.223242   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:11.722502   48683 type.go:168] "Request Body" body=""
	I1206 08:52:11.722584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:11.722907   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:12.222589   48683 type.go:168] "Request Body" body=""
	I1206 08:52:12.222662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:12.223039   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:12.722612   48683 type.go:168] "Request Body" body=""
	I1206 08:52:12.722687   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:12.723066   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:13.222774   48683 type.go:168] "Request Body" body=""
	I1206 08:52:13.222844   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:13.223128   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:13.722797   48683 type.go:168] "Request Body" body=""
	I1206 08:52:13.722874   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:13.723220   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:13.723278   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:14.222938   48683 type.go:168] "Request Body" body=""
	I1206 08:52:14.223011   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:14.223370   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:14.723151   48683 type.go:168] "Request Body" body=""
	I1206 08:52:14.723218   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:14.723511   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:15.223282   48683 type.go:168] "Request Body" body=""
	I1206 08:52:15.223353   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:15.223716   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:15.723508   48683 type.go:168] "Request Body" body=""
	I1206 08:52:15.723596   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:15.723933   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:15.723988   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:16.223075   48683 type.go:168] "Request Body" body=""
	I1206 08:52:16.223148   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:16.223467   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:16.723393   48683 type.go:168] "Request Body" body=""
	I1206 08:52:16.723470   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:16.723870   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:17.222594   48683 type.go:168] "Request Body" body=""
	I1206 08:52:17.222670   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:17.222997   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:17.722535   48683 type.go:168] "Request Body" body=""
	I1206 08:52:17.722611   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:17.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:18.222588   48683 type.go:168] "Request Body" body=""
	I1206 08:52:18.222665   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:18.223008   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:18.223068   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:18.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:52:18.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:18.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:19.222512   48683 type.go:168] "Request Body" body=""
	I1206 08:52:19.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:19.222898   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:19.722569   48683 type.go:168] "Request Body" body=""
	I1206 08:52:19.722641   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:19.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:20.222575   48683 type.go:168] "Request Body" body=""
	I1206 08:52:20.222652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:20.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:20.722493   48683 type.go:168] "Request Body" body=""
	I1206 08:52:20.722564   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:20.722881   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:20.722931   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:21.222827   48683 type.go:168] "Request Body" body=""
	I1206 08:52:21.222898   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:21.223270   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:21.722982   48683 type.go:168] "Request Body" body=""
	I1206 08:52:21.723059   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:21.723422   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:22.223208   48683 type.go:168] "Request Body" body=""
	I1206 08:52:22.223282   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:22.223570   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:22.723366   48683 type.go:168] "Request Body" body=""
	I1206 08:52:22.723481   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:22.723885   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:22.723946   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:23.222491   48683 type.go:168] "Request Body" body=""
	I1206 08:52:23.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:23.222913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:23.722590   48683 type.go:168] "Request Body" body=""
	I1206 08:52:23.722661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:23.722923   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:24.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:52:24.222650   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:24.223028   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:24.722593   48683 type.go:168] "Request Body" body=""
	I1206 08:52:24.722671   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:24.723025   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:25.222591   48683 type.go:168] "Request Body" body=""
	I1206 08:52:25.222663   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:25.222988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:25.223036   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:25.722550   48683 type.go:168] "Request Body" body=""
	I1206 08:52:25.722630   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:25.722980   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:26.223051   48683 type.go:168] "Request Body" body=""
	I1206 08:52:26.223127   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:26.223495   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:26.723130   48683 type.go:168] "Request Body" body=""
	I1206 08:52:26.723210   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:26.723481   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:27.223253   48683 type.go:168] "Request Body" body=""
	I1206 08:52:27.223326   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:27.223710   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:27.223764   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:27.723405   48683 type.go:168] "Request Body" body=""
	I1206 08:52:27.723490   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:27.723850   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:28.222551   48683 type.go:168] "Request Body" body=""
	I1206 08:52:28.222617   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:28.222878   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:28.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:52:28.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:28.722973   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:29.222678   48683 type.go:168] "Request Body" body=""
	I1206 08:52:29.222749   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:29.223068   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:29.722528   48683 type.go:168] "Request Body" body=""
	I1206 08:52:29.722594   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:29.722851   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:29.722889   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:30.222619   48683 type.go:168] "Request Body" body=""
	I1206 08:52:30.222697   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:30.223066   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:30.722574   48683 type.go:168] "Request Body" body=""
	I1206 08:52:30.722654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:30.722978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:31.222962   48683 type.go:168] "Request Body" body=""
	I1206 08:52:31.223058   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:31.223464   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:31.723093   48683 type.go:168] "Request Body" body=""
	I1206 08:52:31.723169   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:31.723529   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:31.723588   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:32.223231   48683 type.go:168] "Request Body" body=""
	I1206 08:52:32.223306   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:32.223675   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:32.723451   48683 type.go:168] "Request Body" body=""
	I1206 08:52:32.723529   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:32.723844   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:33.222539   48683 type.go:168] "Request Body" body=""
	I1206 08:52:33.222619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:33.222968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:33.722693   48683 type.go:168] "Request Body" body=""
	I1206 08:52:33.722771   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:33.723118   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:34.222504   48683 type.go:168] "Request Body" body=""
	I1206 08:52:34.222571   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:34.222833   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:34.222873   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:34.722546   48683 type.go:168] "Request Body" body=""
	I1206 08:52:34.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:34.723016   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:35.222749   48683 type.go:168] "Request Body" body=""
	I1206 08:52:35.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:35.223165   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:35.722851   48683 type.go:168] "Request Body" body=""
	I1206 08:52:35.722928   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:35.723193   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:36.223352   48683 type.go:168] "Request Body" body=""
	I1206 08:52:36.223456   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:36.223828   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:36.223884   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:36.722547   48683 type.go:168] "Request Body" body=""
	I1206 08:52:36.722620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:36.722964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:37.222641   48683 type.go:168] "Request Body" body=""
	I1206 08:52:37.222713   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:37.223007   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:37.722576   48683 type.go:168] "Request Body" body=""
	I1206 08:52:37.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:37.722999   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:38.222708   48683 type.go:168] "Request Body" body=""
	I1206 08:52:38.222795   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:38.223113   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:38.722765   48683 type.go:168] "Request Body" body=""
	I1206 08:52:38.722845   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:38.723210   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:38.723261   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:39.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:52:39.222663   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:39.223000   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:39.722551   48683 type.go:168] "Request Body" body=""
	I1206 08:52:39.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:39.722951   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:40.222518   48683 type.go:168] "Request Body" body=""
	I1206 08:52:40.222590   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:40.222911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:40.722636   48683 type.go:168] "Request Body" body=""
	I1206 08:52:40.722713   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:40.723068   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:41.222845   48683 type.go:168] "Request Body" body=""
	I1206 08:52:41.222923   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:41.223258   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:41.223312   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:41.722994   48683 type.go:168] "Request Body" body=""
	I1206 08:52:41.723058   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:41.723414   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:42.223248   48683 type.go:168] "Request Body" body=""
	I1206 08:52:42.223346   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:42.223858   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:42.722455   48683 type.go:168] "Request Body" body=""
	I1206 08:52:42.722526   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:42.722872   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:43.223420   48683 type.go:168] "Request Body" body=""
	I1206 08:52:43.223489   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:43.223805   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:43.223855   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:43.722522   48683 type.go:168] "Request Body" body=""
	I1206 08:52:43.722596   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:43.722966   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:44.222553   48683 type.go:168] "Request Body" body=""
	I1206 08:52:44.222629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:44.222964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:44.722543   48683 type.go:168] "Request Body" body=""
	I1206 08:52:44.722637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:44.722989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:45.222930   48683 type.go:168] "Request Body" body=""
	I1206 08:52:45.223076   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:45.223835   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:45.223976   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:45.722584   48683 type.go:168] "Request Body" body=""
	I1206 08:52:45.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:45.723037   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:46.223060   48683 type.go:168] "Request Body" body=""
	I1206 08:52:46.223131   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:46.223436   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:46.723272   48683 type.go:168] "Request Body" body=""
	I1206 08:52:46.723352   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:46.723748   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:47.222452   48683 type.go:168] "Request Body" body=""
	I1206 08:52:47.222527   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:47.222887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:47.722563   48683 type.go:168] "Request Body" body=""
	I1206 08:52:47.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:47.722913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:47.722953   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:48.222579   48683 type.go:168] "Request Body" body=""
	I1206 08:52:48.222661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:48.222999   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:48.722576   48683 type.go:168] "Request Body" body=""
	I1206 08:52:48.722667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:48.723001   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:49.222559   48683 type.go:168] "Request Body" body=""
	I1206 08:52:49.222626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:49.222906   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:49.722569   48683 type.go:168] "Request Body" body=""
	I1206 08:52:49.722642   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:49.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:49.723031   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:50.222588   48683 type.go:168] "Request Body" body=""
	I1206 08:52:50.222661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:50.223020   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:50.723341   48683 type.go:168] "Request Body" body=""
	I1206 08:52:50.723423   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:50.723685   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:51.223482   48683 type.go:168] "Request Body" body=""
	I1206 08:52:51.223558   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:51.223901   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:51.722502   48683 type.go:168] "Request Body" body=""
	I1206 08:52:51.722583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:51.722933   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:52.222677   48683 type.go:168] "Request Body" body=""
	I1206 08:52:52.222742   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:52.223070   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:52.223122   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:52.722795   48683 type.go:168] "Request Body" body=""
	I1206 08:52:52.722878   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:52.723205   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:53.222584   48683 type.go:168] "Request Body" body=""
	I1206 08:52:53.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:53.222957   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:53.722552   48683 type.go:168] "Request Body" body=""
	I1206 08:52:53.722621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:53.722913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:54.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:52:54.222654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:54.223005   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:54.722713   48683 type.go:168] "Request Body" body=""
	I1206 08:52:54.722788   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:54.723170   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:54.723229   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:55.222503   48683 type.go:168] "Request Body" body=""
	I1206 08:52:55.222603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:55.222912   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:55.722613   48683 type.go:168] "Request Body" body=""
	I1206 08:52:55.722684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:55.723022   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:56.223191   48683 type.go:168] "Request Body" body=""
	I1206 08:52:56.223262   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:56.223629   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:56.723299   48683 type.go:168] "Request Body" body=""
	I1206 08:52:56.723438   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:56.723703   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:56.723746   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:57.222463   48683 type.go:168] "Request Body" body=""
	I1206 08:52:57.222559   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:57.222925   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:57.722623   48683 type.go:168] "Request Body" body=""
	I1206 08:52:57.722694   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:57.723053   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:58.222555   48683 type.go:168] "Request Body" body=""
	I1206 08:52:58.222627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:58.222882   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:58.722550   48683 type.go:168] "Request Body" body=""
	I1206 08:52:58.722619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:58.722923   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:59.222596   48683 type.go:168] "Request Body" body=""
	I1206 08:52:59.222674   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:59.223010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:59.223071   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:59.722703   48683 type.go:168] "Request Body" body=""
	I1206 08:52:59.722774   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:59.723041   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:00.222680   48683 type.go:168] "Request Body" body=""
	I1206 08:53:00.222765   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:00.223070   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:00.722902   48683 type.go:168] "Request Body" body=""
	I1206 08:53:00.722974   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:00.723300   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:01.223304   48683 type.go:168] "Request Body" body=""
	I1206 08:53:01.223397   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:01.223655   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:01.223703   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:01.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:53:01.723563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:01.723888   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:02.222575   48683 type.go:168] "Request Body" body=""
	I1206 08:53:02.222658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:02.223040   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:02.722720   48683 type.go:168] "Request Body" body=""
	I1206 08:53:02.722789   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:02.723094   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:03.222570   48683 type.go:168] "Request Body" body=""
	I1206 08:53:03.222643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:03.223006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:03.722718   48683 type.go:168] "Request Body" body=""
	I1206 08:53:03.722800   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:03.723133   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:03.723188   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:04.222478   48683 type.go:168] "Request Body" body=""
	I1206 08:53:04.222547   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:04.222820   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:04.722518   48683 type.go:168] "Request Body" body=""
	I1206 08:53:04.722592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:04.722965   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:05.222540   48683 type.go:168] "Request Body" body=""
	I1206 08:53:05.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:05.222941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:05.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:53:05.722596   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:05.722915   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:06.223065   48683 type.go:168] "Request Body" body=""
	I1206 08:53:06.223136   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:06.223522   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:06.223575   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:06.723193   48683 type.go:168] "Request Body" body=""
	I1206 08:53:06.723275   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:06.723670   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:07.223474   48683 type.go:168] "Request Body" body=""
	I1206 08:53:07.223549   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:07.223817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:07.722518   48683 type.go:168] "Request Body" body=""
	I1206 08:53:07.722603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:07.722954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:08.222651   48683 type.go:168] "Request Body" body=""
	I1206 08:53:08.222735   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:08.223112   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:08.722793   48683 type.go:168] "Request Body" body=""
	I1206 08:53:08.722864   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:08.723164   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:08.723216   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:09.222584   48683 type.go:168] "Request Body" body=""
	I1206 08:53:09.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:09.222992   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:09.722671   48683 type.go:168] "Request Body" body=""
	I1206 08:53:09.722749   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:09.723103   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:10.222760   48683 type.go:168] "Request Body" body=""
	I1206 08:53:10.222832   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:10.223102   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:10.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:53:10.722631   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:10.722988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:11.222770   48683 type.go:168] "Request Body" body=""
	I1206 08:53:11.222841   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:11.223177   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:11.223230   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:11.723479   48683 type.go:168] "Request Body" body=""
	I1206 08:53:11.723562   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:11.723836   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:12.222547   48683 type.go:168] "Request Body" body=""
	I1206 08:53:12.222623   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:12.222981   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:12.722692   48683 type.go:168] "Request Body" body=""
	I1206 08:53:12.722772   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:12.723109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:13.222517   48683 type.go:168] "Request Body" body=""
	I1206 08:53:13.222590   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:13.222851   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:13.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:53:13.722599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:13.722955   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:13.723015   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:14.222726   48683 type.go:168] "Request Body" body=""
	I1206 08:53:14.222802   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:14.223149   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:14.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:53:14.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:14.722912   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:15.222538   48683 type.go:168] "Request Body" body=""
	I1206 08:53:15.222617   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:15.222967   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:15.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:53:15.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:15.722981   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:16.222929   48683 type.go:168] "Request Body" body=""
	I1206 08:53:16.223005   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:16.223275   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:16.223314   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:16.723228   48683 type.go:168] "Request Body" body=""
	I1206 08:53:16.723311   48683 node_ready.go:38] duration metric: took 6m0.000967258s for node "functional-090986" to be "Ready" ...
	I1206 08:53:16.726672   48683 out.go:203] 
	W1206 08:53:16.729718   48683 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 08:53:16.729749   48683 out.go:285] * 
	W1206 08:53:16.732326   48683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:53:16.735459   48683 out.go:203] 
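The tail of the start log above is the client-side readiness wait: one GET of the node object roughly every 500ms, retried straight through every "connection refused", until the 6-minute deadline expires and minikube exits with GUEST_START. A minimal client-go sketch of that polling pattern (illustrative only; waitNodeReady, the timings, and the kubeconfig handling are assumptions, not minikube's actual node_ready.go):

// nodeready.go: poll a node's Ready condition until it is True or the
// context deadline expires, retrying through transient request errors.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done(): // corresponds to "WaitNodeCondition: context deadline exceeded"
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-tick.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				continue // e.g. "dial tcp ... connection refused": retry on the next tick
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-090986"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

Note that the loop above never even receives an HTTP status back (status="" in every Response line): nothing is listening on 8441, so the wait can only run out the clock.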
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685515413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685539175Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685585510Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685633936Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685656131Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685667889Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685677202Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685694884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685715749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.685747626Z" level=info msg="Connect containerd service"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.686095638Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.686870939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.705986055Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.706051458Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.706080389Z" level=info msg="Start subscribing containerd event"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.706125444Z" level=info msg="Start recovering state"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748802723Z" level=info msg="Start event monitor"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748865804Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748876331Z" level=info msg="Start streaming server"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748885316Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748893349Z" level=info msg="runtime interface starting up..."
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748899814Z" level=info msg="starting plugins..."
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.748911703Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 08:47:13 functional-090986 containerd[5266]: time="2025-12-06T08:47:13.749245635Z" level=info msg="containerd successfully booted in 0.086829s"
	Dec 06 08:47:13 functional-090986 systemd[1]: Started containerd.service - containerd container runtime.
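One line worth noting in the containerd startup above: the CRI plugin found no CNI config in /etc/cni/net.d. That message is often benign this early in boot (the CNI config is typically installed after the runtime comes up), but it is cheap to verify on the node. A minimal check, assuming only the Go standard library and the path from the log:

// cnicheck.go: report whether any CNI network config (.conf/.conflist)
// is present where containerd's CRI plugin looks for it.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*.conf*")
	if err != nil {
		panic(err)
	}
	if len(matches) == 0 {
		fmt.Fprintln(os.Stderr, "no CNI config found; pod networking cannot be configured yet")
		os.Exit(1)
	}
	for _, m := range matches {
		fmt.Println(m)
	}
}

Given that kubelet never stays up (see the kubelet section below), the missing CNI config is a symptom here rather than the cause.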
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:53:21.118900    8620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:21.119334    8620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:21.121067    8620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:21.122690    8620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:21.123174    8620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
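kubectl on the node cannot reach the apiserver on localhost:8441, and the test's direct requests to 192.168.49.2:8441 fail the same way, so nothing is listening on the port at all (as opposed to, say, a TLS or auth problem). A quick TCP probe makes that distinction concrete (sketch; the address is taken from the log):

// probe.go: check whether anything accepts TCP connections on the
// apiserver port; "connection refused" means there is no listener.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "192.168.49.2:8441"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "apiserver port not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("TCP connect to", addr, "succeeded; a process is listening")
}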
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> kernel <==
	 08:53:21 up 35 min,  0 user,  load average: 0.34, 0.28, 0.54
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 08:53:17 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:18 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 06 08:53:18 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:18 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:18 functional-090986 kubelet[8432]: E1206 08:53:18.534878    8432 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:18 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:18 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:19 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 06 08:53:19 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:19 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:19 functional-090986 kubelet[8494]: E1206 08:53:19.313699    8494 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:19 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:19 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:19 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 06 08:53:19 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:19 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:20 functional-090986 kubelet[8515]: E1206 08:53:20.035771    8515 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:20 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:20 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:20 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 06 08:53:20 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:20 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:20 functional-090986 kubelet[8537]: E1206 08:53:20.771507    8537 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:20 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:20 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
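The kubelet section at the end of that dump is the likely root cause of everything else in it: systemd has restarted kubelet more than 800 times, and every attempt exits during configuration validation because this kubelet build refuses to run on a cgroup v1 host. With no kubelet, the static apiserver pod never starts, which is why every request to port 8441 is refused. A minimal host-side check of the cgroup version (sketch; uses golang.org/x/sys and the standard Linux mount path):

// cgroupcheck.go: report whether /sys/fs/cgroup is the cgroup v2 unified
// hierarchy by comparing the filesystem magic returned by statfs(2).
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 hierarchy; matches the kubelet validation failure in this log")
	}
}

The 5.15.0-1084-aws kernel shown in the kernel section supports cgroup v2, so this looks like a host/image configuration issue (cgroup v1 still mounted) rather than a kernel limitation.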
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (353.6221ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 kubectl -- --context functional-090986 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 kubectl -- --context functional-090986 get pods: exit status 1 (100.090045ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-090986 kubectl -- --context functional-090986 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
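As an aside, the forwarded ports in the inspect output above can be queried directly with the same Go template this log later shows minikube's cli_runner passing to "docker container inspect -f". A minimal sketch, assuming Docker is on PATH and the functional-090986 container exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template string the log shows minikube using to find the SSH port.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-090986").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // prints 32788 for the run above
}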
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (296.895109ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-090986 logs -n 25: (1.033089757s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-181746 image build -t localhost/my-image:functional-181746 testdata/build --alsologtostderr                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format json --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format table --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls                                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ delete         │ -p functional-181746                                                                                                                                    │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ start          │ -p functional-090986 --alsologtostderr -v=8                                                                                                             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:47 UTC │                     │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:latest                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add minikube-local-cache-test:functional-090986                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache delete minikube-local-cache-test:functional-090986                                                                              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl images                                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	│ cache          │ functional-090986 cache reload                                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ kubectl        │ functional-090986 kubectl -- --context functional-090986 get pods                                                                                       │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:47:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:47:11.094911   48683 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:47:11.095050   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095060   48683 out.go:374] Setting ErrFile to fd 2...
	I1206 08:47:11.095065   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095329   48683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:47:11.095763   48683 out.go:368] Setting JSON to false
	I1206 08:47:11.096588   48683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1782,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:47:11.096668   48683 start.go:143] virtualization:  
	I1206 08:47:11.100026   48683 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:47:11.103775   48683 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:47:11.103977   48683 notify.go:221] Checking for updates...
	I1206 08:47:11.109719   48683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:47:11.112668   48683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:11.115549   48683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:47:11.118516   48683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:47:11.121495   48683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:47:11.124961   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:11.125074   48683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:47:11.149854   48683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:47:11.149988   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.212959   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.203697623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.213084   48683 docker.go:319] overlay module found
	I1206 08:47:11.216243   48683 out.go:179] * Using the docker driver based on existing profile
	I1206 08:47:11.219285   48683 start.go:309] selected driver: docker
	I1206 08:47:11.219311   48683 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.219451   48683 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:47:11.219560   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.284944   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.27604915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.285369   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:11.285438   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:47:11.285486   48683 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.289257   48683 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:47:11.292082   48683 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:47:11.295206   48683 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:47:11.298095   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:11.298152   48683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:47:11.298166   48683 cache.go:65] Caching tarball of preloaded images
	I1206 08:47:11.298170   48683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:47:11.298253   48683 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:47:11.298264   48683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:47:11.298374   48683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:47:11.317301   48683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:47:11.317323   48683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:47:11.317345   48683 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:47:11.317377   48683 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:47:11.317445   48683 start.go:364] duration metric: took 50.847µs to acquireMachinesLock for "functional-090986"
	I1206 08:47:11.317466   48683 start.go:96] Skipping create...Using existing machine configuration
	I1206 08:47:11.317471   48683 fix.go:54] fixHost starting: 
	I1206 08:47:11.317772   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:11.334567   48683 fix.go:112] recreateIfNeeded on functional-090986: state=Running err=<nil>
	W1206 08:47:11.334595   48683 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 08:47:11.337684   48683 out.go:252] * Updating the running docker "functional-090986" container ...
	I1206 08:47:11.337717   48683 machine.go:94] provisionDockerMachine start ...
	I1206 08:47:11.337795   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.354534   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.354869   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.354883   48683 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:47:11.507058   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.507088   48683 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:47:11.507161   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.525196   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.525520   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.525537   48683 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:47:11.684471   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.684556   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.702187   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.702515   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.702540   48683 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:47:11.859622   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: 
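The hostname and /etc/hosts commands above run over the SSH port Docker published for the container (127.0.0.1:32788 in this run). For illustration, a minimal sketch of the same round trip with golang.org/x/crypto/ssh, assuming the profile's id_rsa path shown later in this log; this is not minikube's actual ssh_runner code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // functional-090986, matching the SSH output above
}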
	I1206 08:47:11.859650   48683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:47:11.859671   48683 ubuntu.go:190] setting up certificates
	I1206 08:47:11.859680   48683 provision.go:84] configureAuth start
	I1206 08:47:11.859747   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:11.877706   48683 provision.go:143] copyHostCerts
	I1206 08:47:11.877750   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877787   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:47:11.877800   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877873   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:47:11.877976   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.877997   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:47:11.878007   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.878035   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:47:11.878088   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878108   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:47:11.878114   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878140   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:47:11.878192   48683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
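configureAuth regenerates a server certificate whose subject alternative names cover every address the machine answers on (the san=[...] list above). As an illustration only, not minikube's actual provision code, here is how such a certificate can be issued from an existing CA with Go's crypto/x509; file names mirror the paths in the log, and a PKCS#1 RSA CA key is assumed:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-090986"}}, // org= from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line above.
		DNSNames:    []string{"functional-090986", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
	check(err)
	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}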
	I1206 08:47:12.018564   48683 provision.go:177] copyRemoteCerts
	I1206 08:47:12.018632   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:47:12.018672   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.036577   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.143156   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 08:47:12.143226   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:47:12.160243   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 08:47:12.160303   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 08:47:12.177568   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 08:47:12.177628   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:47:12.194504   48683 provision.go:87] duration metric: took 334.802128ms to configureAuth
	I1206 08:47:12.194543   48683 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:47:12.194717   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:12.194725   48683 machine.go:97] duration metric: took 857.000255ms to provisionDockerMachine
	I1206 08:47:12.194732   48683 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:47:12.194743   48683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:47:12.194796   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:47:12.194842   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.212073   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.315270   48683 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:47:12.318678   48683 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 08:47:12.318701   48683 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 08:47:12.318706   48683 command_runner.go:130] > VERSION_ID="12"
	I1206 08:47:12.318711   48683 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 08:47:12.318717   48683 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 08:47:12.318720   48683 command_runner.go:130] > ID=debian
	I1206 08:47:12.318724   48683 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 08:47:12.318730   48683 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 08:47:12.318735   48683 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 08:47:12.318975   48683 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:47:12.319002   48683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:47:12.319013   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:47:12.319072   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:47:12.319161   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:47:12.319172   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /etc/ssl/certs/42922.pem
	I1206 08:47:12.319246   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:47:12.319253   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> /etc/test/nested/copy/4292/hosts
	I1206 08:47:12.319298   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:47:12.327031   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:12.344679   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:47:12.363077   48683 start.go:296] duration metric: took 168.329595ms for postStartSetup
	I1206 08:47:12.363152   48683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:47:12.363210   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.380353   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.487060   48683 command_runner.go:130] > 11%
	I1206 08:47:12.487699   48683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:47:12.493338   48683 command_runner.go:130] > 174G
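Both disk checks above shell out to df/awk over SSH; the same figures come from statfs(2) without a subprocess. A rough Go equivalent, assuming Linux and the golang.org/x/sys/unix package (df's use% also accounts for reserved blocks, so the percentage is approximate):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/var", &st); err != nil {
		panic(err)
	}
	freeGB := st.Bavail * uint64(st.Bsize) / (1 << 30)
	usedPct := 100 - 100*st.Bavail/st.Blocks // rough: ignores root-reserved blocks
	fmt.Printf("%d%% used, %dG free\n", usedPct, freeGB) // ~11% used, ~174G free in this run
}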
	I1206 08:47:12.494716   48683 fix.go:56] duration metric: took 1.177238165s for fixHost
	I1206 08:47:12.494741   48683 start.go:83] releasing machines lock for "functional-090986", held for 1.177286419s
	I1206 08:47:12.494813   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:12.512960   48683 ssh_runner.go:195] Run: cat /version.json
	I1206 08:47:12.513022   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.513272   48683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:47:12.513331   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.541090   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.554766   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.647127   48683 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 08:47:12.647264   48683 ssh_runner.go:195] Run: systemctl --version
	I1206 08:47:12.750867   48683 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 08:47:12.751021   48683 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 08:47:12.751059   48683 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 08:47:12.751151   48683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 08:47:12.755609   48683 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 08:47:12.756103   48683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:47:12.756176   48683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:47:12.764393   48683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 08:47:12.764420   48683 start.go:496] detecting cgroup driver to use...
	I1206 08:47:12.764452   48683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 08:47:12.764507   48683 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:47:12.779951   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:47:12.793243   48683 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:47:12.793324   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:47:12.809005   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:47:12.823043   48683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:47:12.939696   48683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:47:13.060632   48683 docker.go:234] disabling docker service ...
	I1206 08:47:13.060721   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:47:13.078332   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:47:13.093719   48683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:47:13.229319   48683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:47:13.368814   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:47:13.381432   48683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:47:13.395011   48683 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1206 08:47:13.396419   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:47:13.405770   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:47:13.415310   48683 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:47:13.415505   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:47:13.424963   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.433399   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:47:13.442072   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.450816   48683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:47:13.458824   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:47:13.467776   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:47:13.477145   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 08:47:13.486457   48683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:47:13.493910   48683 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 08:47:13.494986   48683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:47:13.503356   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:13.622996   48683 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 08:47:13.753042   48683 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:47:13.753133   48683 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:47:13.757647   48683 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 08:47:13.757672   48683 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 08:47:13.757681   48683 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1206 08:47:13.757689   48683 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:13.757724   48683 command_runner.go:130] > Access: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757736   48683 command_runner.go:130] > Modify: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757742   48683 command_runner.go:130] > Change: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757746   48683 command_runner.go:130] >  Birth: -
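The "Will wait 60s for socket path" step amounts to polling stat until the restarted containerd recreates its socket. A minimal sketch of that wait loop; the path and timeout come from the log, while the polling interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket is back, as the stat output above confirms
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}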
	I1206 08:47:13.757803   48683 start.go:564] Will wait 60s for crictl version
	I1206 08:47:13.757883   48683 ssh_runner.go:195] Run: which crictl
	I1206 08:47:13.761846   48683 command_runner.go:130] > /usr/local/bin/crictl
	I1206 08:47:13.761974   48683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:47:13.786269   48683 command_runner.go:130] > Version:  0.1.0
	I1206 08:47:13.786289   48683 command_runner.go:130] > RuntimeName:  containerd
	I1206 08:47:13.786295   48683 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 08:47:13.786302   48683 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 08:47:13.788604   48683 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 08:47:13.788708   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.809864   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 08:47:13.811926   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.831700   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 08:47:13.839817   48683 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 08:47:13.842721   48683 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:47:13.858999   48683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:47:13.862710   48683 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 08:47:13.862939   48683 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:47:13.863057   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:13.863132   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.889556   48683 command_runner.go:130] > {
	I1206 08:47:13.889580   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.889586   48683 command_runner.go:130] >     {
	I1206 08:47:13.889601   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.889607   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889612   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.889616   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889619   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889628   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.889635   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889640   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.889652   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889657   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889661   48683 command_runner.go:130] >     },
	I1206 08:47:13.889664   48683 command_runner.go:130] >     {
	I1206 08:47:13.889672   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.889676   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889681   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.889687   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889691   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889707   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.889710   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889715   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.889725   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889729   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889733   48683 command_runner.go:130] >     },
	I1206 08:47:13.889736   48683 command_runner.go:130] >     {
	I1206 08:47:13.889743   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.889752   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889767   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.889770   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889777   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889785   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.889792   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889796   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.889800   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.889808   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889815   48683 command_runner.go:130] >     },
	I1206 08:47:13.889818   48683 command_runner.go:130] >     {
	I1206 08:47:13.889825   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.889829   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889837   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.889841   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889844   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889852   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.889863   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889867   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.889871   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889875   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889885   48683 command_runner.go:130] >       },
	I1206 08:47:13.889889   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889892   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889896   48683 command_runner.go:130] >     },
	I1206 08:47:13.889899   48683 command_runner.go:130] >     {
	I1206 08:47:13.889906   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.889912   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889918   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.889920   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889925   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889933   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.889937   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889945   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.889949   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889960   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889964   48683 command_runner.go:130] >       },
	I1206 08:47:13.889970   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889975   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889987   48683 command_runner.go:130] >     },
	I1206 08:47:13.890022   48683 command_runner.go:130] >     {
	I1206 08:47:13.890033   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.890037   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890043   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.890049   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890054   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890064   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.890070   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890075   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.890078   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890082   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890087   48683 command_runner.go:130] >       },
	I1206 08:47:13.890092   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890098   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890102   48683 command_runner.go:130] >     },
	I1206 08:47:13.890105   48683 command_runner.go:130] >     {
	I1206 08:47:13.890112   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.890115   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890121   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.890124   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890139   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.890145   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890149   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.890153   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890156   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890159   48683 command_runner.go:130] >     },
	I1206 08:47:13.890170   48683 command_runner.go:130] >     {
	I1206 08:47:13.890177   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.890181   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890187   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.890190   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890206   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.890215   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890223   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.890228   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890231   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890235   48683 command_runner.go:130] >       },
	I1206 08:47:13.890239   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890250   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890254   48683 command_runner.go:130] >     },
	I1206 08:47:13.890257   48683 command_runner.go:130] >     {
	I1206 08:47:13.890264   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.890272   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890277   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.890280   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890284   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890291   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.890294   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890298   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.890305   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890310   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.890315   48683 command_runner.go:130] >       },
	I1206 08:47:13.890319   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890331   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.890335   48683 command_runner.go:130] >     }
	I1206 08:47:13.890337   48683 command_runner.go:130] >   ]
	I1206 08:47:13.890340   48683 command_runner.go:130] > }
	I1206 08:47:13.892630   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.892653   48683 containerd.go:534] Images already preloaded, skipping extraction
	I1206 08:47:13.892734   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.915064   48683 command_runner.go:130] > {
	I1206 08:47:13.915085   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.915091   48683 command_runner.go:130] >     {
	I1206 08:47:13.915102   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.915109   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915115   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.915119   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915142   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.915149   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915153   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.915157   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915161   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915164   48683 command_runner.go:130] >     },
	I1206 08:47:13.915167   48683 command_runner.go:130] >     {
	I1206 08:47:13.915178   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.915184   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915189   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.915193   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915208   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.915214   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915218   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.915222   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915225   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915228   48683 command_runner.go:130] >     },
	I1206 08:47:13.915231   48683 command_runner.go:130] >     {
	I1206 08:47:13.915238   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.915245   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915251   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.915254   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915262   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915270   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.915275   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915279   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.915286   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.915291   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915295   48683 command_runner.go:130] >     },
	I1206 08:47:13.915298   48683 command_runner.go:130] >     {
	I1206 08:47:13.915305   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.915311   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915320   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.915324   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915328   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915338   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.915341   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915345   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.915349   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915352   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915359   48683 command_runner.go:130] >       },
	I1206 08:47:13.915363   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915410   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915414   48683 command_runner.go:130] >     },
	I1206 08:47:13.915418   48683 command_runner.go:130] >     {
	I1206 08:47:13.915424   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.915428   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915434   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.915437   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915441   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915448   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.915451   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915455   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.915458   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915471   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915474   48683 command_runner.go:130] >       },
	I1206 08:47:13.915478   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915481   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915484   48683 command_runner.go:130] >     },
	I1206 08:47:13.915487   48683 command_runner.go:130] >     {
	I1206 08:47:13.915494   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.915497   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915503   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.915506   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915509   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915523   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.915526   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915530   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.915534   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915540   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915543   48683 command_runner.go:130] >       },
	I1206 08:47:13.915547   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915550   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915553   48683 command_runner.go:130] >     },
	I1206 08:47:13.915556   48683 command_runner.go:130] >     {
	I1206 08:47:13.915563   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.915580   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915585   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.915588   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915592   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915601   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.915608   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915612   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.915616   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915620   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915622   48683 command_runner.go:130] >     },
	I1206 08:47:13.915626   48683 command_runner.go:130] >     {
	I1206 08:47:13.915635   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.915649   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915655   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.915658   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915662   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915670   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.915676   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915680   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.915684   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915687   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915691   48683 command_runner.go:130] >       },
	I1206 08:47:13.915699   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915706   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915710   48683 command_runner.go:130] >     },
	I1206 08:47:13.915713   48683 command_runner.go:130] >     {
	I1206 08:47:13.915720   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.915723   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915728   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.915731   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915735   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915746   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.915752   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915756   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.915760   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915764   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.915777   48683 command_runner.go:130] >       },
	I1206 08:47:13.915781   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915785   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.915790   48683 command_runner.go:130] >     }
	I1206 08:47:13.915793   48683 command_runner.go:130] >   ]
	I1206 08:47:13.915796   48683 command_runner.go:130] > }
	I1206 08:47:13.917976   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.917998   48683 cache_images.go:86] Images are preloaded, skipping loading
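The two `crictl images --output json` dumps above are how minikube decides that the preloaded image set is complete. A minimal Go sketch of the same kind of check, assuming `crictl` is on PATH; the struct covers only the fields used here, and the expected-image list is an illustrative subset of the tags listed above (this is not minikube's actual cache_images logic):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// image mirrors only the fields of `crictl images --output json`
// that this check needs; the real output carries more fields.
type image struct {
	RepoTags []string `json:"repoTags"`
}

type imageList struct {
	Images []image `json:"images"`
}

func main() {
	// Run the same command the log shows; crictl must be on PATH.
	out, err := exec.Command("crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl failed:", err)
		os.Exit(1)
	}

	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Fprintln(os.Stderr, "bad JSON:", err)
		os.Exit(1)
	}

	// Collect every tag present on the node.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	// Illustrative subset of the images the log above lists.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/pause:3.10.1",
	}
	for _, tag := range want {
		fmt.Printf("%-55s preloaded=%v\n", tag, have[tag])
	}
}
```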
	I1206 08:47:13.918006   48683 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:47:13.918108   48683 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
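The kubelet drop-in above is rendered from the node config. As a hedged illustration only (not minikube's actual template, which carries more flags), `text/template` can produce the same shape:

```go
package main

import (
	"os"
	"text/template"
)

// unit is an illustrative template shaped like the drop-in in the log.
const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log above.
	if err := t.Execute(os.Stdout, map[string]string{
		"Runtime": "containerd",
		"Version": "v1.35.0-beta.0",
		"Node":    "functional-090986",
		"IP":      "192.168.49.2",
	}); err != nil {
		panic(err)
	}
}
```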
	I1206 08:47:13.918181   48683 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:47:13.946472   48683 command_runner.go:130] > {
	I1206 08:47:13.946489   48683 command_runner.go:130] >   "cniconfig": {
	I1206 08:47:13.946494   48683 command_runner.go:130] >     "Networks": [
	I1206 08:47:13.946497   48683 command_runner.go:130] >       {
	I1206 08:47:13.946502   48683 command_runner.go:130] >         "Config": {
	I1206 08:47:13.946507   48683 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 08:47:13.946512   48683 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 08:47:13.946516   48683 command_runner.go:130] >           "Plugins": [
	I1206 08:47:13.946520   48683 command_runner.go:130] >             {
	I1206 08:47:13.946524   48683 command_runner.go:130] >               "Network": {
	I1206 08:47:13.946529   48683 command_runner.go:130] >                 "ipam": {},
	I1206 08:47:13.946537   48683 command_runner.go:130] >                 "type": "loopback"
	I1206 08:47:13.946541   48683 command_runner.go:130] >               },
	I1206 08:47:13.946554   48683 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 08:47:13.946558   48683 command_runner.go:130] >             }
	I1206 08:47:13.946561   48683 command_runner.go:130] >           ],
	I1206 08:47:13.946573   48683 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 08:47:13.946581   48683 command_runner.go:130] >         },
	I1206 08:47:13.946586   48683 command_runner.go:130] >         "IFName": "lo"
	I1206 08:47:13.946590   48683 command_runner.go:130] >       }
	I1206 08:47:13.946593   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946597   48683 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 08:47:13.946601   48683 command_runner.go:130] >     "PluginDirs": [
	I1206 08:47:13.946605   48683 command_runner.go:130] >       "/opt/cni/bin"
	I1206 08:47:13.946609   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946613   48683 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 08:47:13.946617   48683 command_runner.go:130] >     "Prefix": "eth"
	I1206 08:47:13.946620   48683 command_runner.go:130] >   },
	I1206 08:47:13.946623   48683 command_runner.go:130] >   "config": {
	I1206 08:47:13.946627   48683 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 08:47:13.946630   48683 command_runner.go:130] >       "/etc/cdi",
	I1206 08:47:13.946636   48683 command_runner.go:130] >       "/var/run/cdi"
	I1206 08:47:13.946640   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946643   48683 command_runner.go:130] >     "cni": {
	I1206 08:47:13.946646   48683 command_runner.go:130] >       "binDir": "",
	I1206 08:47:13.946650   48683 command_runner.go:130] >       "binDirs": [
	I1206 08:47:13.946653   48683 command_runner.go:130] >         "/opt/cni/bin"
	I1206 08:47:13.946656   48683 command_runner.go:130] >       ],
	I1206 08:47:13.946661   48683 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 08:47:13.946665   48683 command_runner.go:130] >       "confTemplate": "",
	I1206 08:47:13.946668   48683 command_runner.go:130] >       "ipPref": "",
	I1206 08:47:13.946672   48683 command_runner.go:130] >       "maxConfNum": 1,
	I1206 08:47:13.946676   48683 command_runner.go:130] >       "setupSerially": false,
	I1206 08:47:13.946680   48683 command_runner.go:130] >       "useInternalLoopback": false
	I1206 08:47:13.946683   48683 command_runner.go:130] >     },
	I1206 08:47:13.946688   48683 command_runner.go:130] >     "containerd": {
	I1206 08:47:13.946696   48683 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 08:47:13.946701   48683 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 08:47:13.946706   48683 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 08:47:13.946710   48683 command_runner.go:130] >       "runtimes": {
	I1206 08:47:13.946713   48683 command_runner.go:130] >         "runc": {
	I1206 08:47:13.946718   48683 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 08:47:13.946722   48683 command_runner.go:130] >           "PodAnnotations": null,
	I1206 08:47:13.946728   48683 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 08:47:13.946733   48683 command_runner.go:130] >           "cgroupWritable": false,
	I1206 08:47:13.946738   48683 command_runner.go:130] >           "cniConfDir": "",
	I1206 08:47:13.946742   48683 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 08:47:13.946745   48683 command_runner.go:130] >           "io_type": "",
	I1206 08:47:13.946748   48683 command_runner.go:130] >           "options": {
	I1206 08:47:13.946752   48683 command_runner.go:130] >             "BinaryName": "",
	I1206 08:47:13.946756   48683 command_runner.go:130] >             "CriuImagePath": "",
	I1206 08:47:13.946761   48683 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 08:47:13.946764   48683 command_runner.go:130] >             "IoGid": 0,
	I1206 08:47:13.946768   48683 command_runner.go:130] >             "IoUid": 0,
	I1206 08:47:13.946772   48683 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 08:47:13.946776   48683 command_runner.go:130] >             "Root": "",
	I1206 08:47:13.946780   48683 command_runner.go:130] >             "ShimCgroup": "",
	I1206 08:47:13.946784   48683 command_runner.go:130] >             "SystemdCgroup": false
	I1206 08:47:13.946787   48683 command_runner.go:130] >           },
	I1206 08:47:13.946793   48683 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 08:47:13.946799   48683 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 08:47:13.946803   48683 command_runner.go:130] >           "runtimePath": "",
	I1206 08:47:13.946808   48683 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 08:47:13.946812   48683 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 08:47:13.946816   48683 command_runner.go:130] >           "snapshotter": ""
	I1206 08:47:13.946820   48683 command_runner.go:130] >         }
	I1206 08:47:13.946823   48683 command_runner.go:130] >       }
	I1206 08:47:13.946826   48683 command_runner.go:130] >     },
	I1206 08:47:13.946836   48683 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 08:47:13.946848   48683 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 08:47:13.946854   48683 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 08:47:13.946858   48683 command_runner.go:130] >     "disableApparmor": false,
	I1206 08:47:13.946863   48683 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 08:47:13.946867   48683 command_runner.go:130] >     "disableProcMount": false,
	I1206 08:47:13.946871   48683 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 08:47:13.946874   48683 command_runner.go:130] >     "enableCDI": true,
	I1206 08:47:13.946878   48683 command_runner.go:130] >     "enableSelinux": false,
	I1206 08:47:13.946883   48683 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 08:47:13.946887   48683 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 08:47:13.946891   48683 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 08:47:13.946896   48683 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 08:47:13.946900   48683 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 08:47:13.946905   48683 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 08:47:13.946909   48683 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 08:47:13.946917   48683 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946922   48683 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 08:47:13.946928   48683 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946932   48683 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 08:47:13.946937   48683 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 08:47:13.946940   48683 command_runner.go:130] >   },
	I1206 08:47:13.946943   48683 command_runner.go:130] >   "features": {
	I1206 08:47:13.946948   48683 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 08:47:13.946951   48683 command_runner.go:130] >   },
	I1206 08:47:13.946955   48683 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 08:47:13.946964   48683 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946974   48683 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946977   48683 command_runner.go:130] >   "runtimeHandlers": [
	I1206 08:47:13.946980   48683 command_runner.go:130] >     {
	I1206 08:47:13.946984   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.946988   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.946992   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.946996   48683 command_runner.go:130] >       }
	I1206 08:47:13.947002   48683 command_runner.go:130] >     },
	I1206 08:47:13.947006   48683 command_runner.go:130] >     {
	I1206 08:47:13.947009   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.947015   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.947019   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.947022   48683 command_runner.go:130] >       },
	I1206 08:47:13.947026   48683 command_runner.go:130] >       "name": "runc"
	I1206 08:47:13.947029   48683 command_runner.go:130] >     }
	I1206 08:47:13.947032   48683 command_runner.go:130] >   ],
	I1206 08:47:13.947035   48683 command_runner.go:130] >   "status": {
	I1206 08:47:13.947039   48683 command_runner.go:130] >     "conditions": [
	I1206 08:47:13.947042   48683 command_runner.go:130] >       {
	I1206 08:47:13.947046   48683 command_runner.go:130] >         "message": "",
	I1206 08:47:13.947050   48683 command_runner.go:130] >         "reason": "",
	I1206 08:47:13.947053   48683 command_runner.go:130] >         "status": true,
	I1206 08:47:13.947059   48683 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 08:47:13.947062   48683 command_runner.go:130] >       },
	I1206 08:47:13.947065   48683 command_runner.go:130] >       {
	I1206 08:47:13.947072   48683 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 08:47:13.947081   48683 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 08:47:13.947085   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947089   48683 command_runner.go:130] >         "type": "NetworkReady"
	I1206 08:47:13.947091   48683 command_runner.go:130] >       },
	I1206 08:47:13.947094   48683 command_runner.go:130] >       {
	I1206 08:47:13.947118   48683 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 08:47:13.947123   48683 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 08:47:13.947129   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947134   48683 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 08:47:13.947137   48683 command_runner.go:130] >       }
	I1206 08:47:13.947139   48683 command_runner.go:130] >     ]
	I1206 08:47:13.947142   48683 command_runner.go:130] >   }
	I1206 08:47:13.947144   48683 command_runner.go:130] > }
	I1206 08:47:13.947502   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:13.947519   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
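The `NetworkReady` condition with reason `NetworkPluginNotReady` in the `crictl info` dump above is what drives the kindnet recommendation here. A minimal sketch, again assuming `crictl` on PATH, that extracts just that condition:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// condition and info mirror only the status section of `crictl info`.
type condition struct {
	Type    string `json:"type"`
	Status  bool   `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

type info struct {
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var i info
	if err := json.Unmarshal(out, &i); err != nil {
		panic(err)
	}
	for _, c := range i.Status.Conditions {
		if c.Type == "NetworkReady" {
			fmt.Printf("NetworkReady=%v reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```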
	I1206 08:47:13.947541   48683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:47:13.947564   48683 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:47:13.947673   48683 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
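The generated kubeadm config above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch, assuming `gopkg.in/yaml.v3`, that walks the documents and prints each kind:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Feed the generated config on stdin, e.g.
	//   go run main.go < /var/tmp/minikube/kubeadm.yaml.new
	dec := yaml.NewDecoder(os.Stdin)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```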
	
	I1206 08:47:13.947742   48683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:47:13.955523   48683 command_runner.go:130] > kubeadm
	I1206 08:47:13.955542   48683 command_runner.go:130] > kubectl
	I1206 08:47:13.955546   48683 command_runner.go:130] > kubelet
	I1206 08:47:13.955560   48683 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:47:13.955622   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:47:13.963242   48683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:47:13.976514   48683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:47:13.994365   48683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 08:47:14.008131   48683 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:47:14.012074   48683 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
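The grep above verifies that `/etc/hosts` already maps `control-plane.minikube.internal` to the node IP. An equivalent check written directly in Go (a sketch; the `hasHostEntry` helper is ours, not minikube's):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasHostEntry reports whether /etc/hosts maps name to ip, mirroring
// the `grep "192.168.49.2<TAB>control-plane.minikube.internal$"` above.
func hasHostEntry(ip, name string) (bool, error) {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == ip {
			for _, alias := range fields[1:] {
				if alias == name {
					return true, nil
				}
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasHostEntry("192.168.49.2", "control-plane.minikube.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("entry present:", ok)
}
```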
	I1206 08:47:14.012170   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:14.162349   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:14.970935   48683 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:47:14.971004   48683 certs.go:195] generating shared ca certs ...
	I1206 08:47:14.971035   48683 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:14.971212   48683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:47:14.971308   48683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:47:14.971340   48683 certs.go:257] generating profile certs ...
	I1206 08:47:14.971529   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:47:14.971755   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:47:14.971844   48683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:47:14.971869   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 08:47:14.971914   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 08:47:14.971945   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 08:47:14.971989   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 08:47:14.972021   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 08:47:14.972053   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 08:47:14.972085   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 08:47:14.972115   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 08:47:14.972198   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:47:14.972259   48683 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:47:14.972284   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:47:14.972342   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:47:14.972394   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:47:14.972452   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:47:14.972528   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:14.972579   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:14.972619   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem -> /usr/share/ca-certificates/4292.pem
	I1206 08:47:14.972659   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /usr/share/ca-certificates/42922.pem
	I1206 08:47:14.973224   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:47:14.995297   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:47:15.042161   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:47:15.062885   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:47:15.082018   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:47:15.101436   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:47:15.120061   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:47:15.140257   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:47:15.160107   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:47:15.178980   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:47:15.197893   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:47:15.216224   48683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:47:15.229330   48683 ssh_runner.go:195] Run: openssl version
	I1206 08:47:15.235331   48683 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 08:47:15.235817   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.243429   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:47:15.250764   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254643   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254673   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254723   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.295906   48683 command_runner.go:130] > b5213941
	I1206 08:47:15.295990   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:47:15.303441   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.310784   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:47:15.318504   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322051   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322380   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322461   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.363237   48683 command_runner.go:130] > 51391683
	I1206 08:47:15.363703   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:47:15.371299   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.378918   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:47:15.386367   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390281   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390354   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390410   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.431004   48683 command_runner.go:130] > 3ec20f2e
	I1206 08:47:15.431441   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
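The pattern repeated three times above (copy the PEM, compute its subject hash with `openssl x509 -hash -noout`, symlink `<hash>.0` in /etc/ssl/certs) is the standard way OpenSSL-style trust stores are populated. A minimal sketch of the same steps from Go, assuming `openssl` on PATH and write access to /etc/ssl/certs; `installCA` is an illustrative helper:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name, the same steps the log runs over ssh.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")

	// ln -fs semantics: drop any stale link before creating the new one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```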
	I1206 08:47:15.439072   48683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442819   48683 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442856   48683 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 08:47:15.442863   48683 command_runner.go:130] > Device: 259,1	Inode: 1055659     Links: 1
	I1206 08:47:15.442870   48683 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:15.442877   48683 command_runner.go:130] > Access: 2025-12-06 08:43:07.824678266 +0000
	I1206 08:47:15.442882   48683 command_runner.go:130] > Modify: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442890   48683 command_runner.go:130] > Change: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442895   48683 command_runner.go:130] >  Birth: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442956   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 08:47:15.483144   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.483601   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 08:47:15.524376   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.524527   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 08:47:15.567333   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.567897   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 08:47:15.609722   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.610195   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 08:47:15.652939   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.653458   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 08:47:15.694815   48683 command_runner.go:130] > Certificate will not expire
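`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 24 hours, which is what each check above tests. The same check can be done natively with `crypto/x509`, as in this sketch (`expiresWithin` is an illustrative helper, not minikube code):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```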
	I1206 08:47:15.695278   48683 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:15.695370   48683 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:47:15.695465   48683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:47:15.724990   48683 cri.go:89] found id: ""
	I1206 08:47:15.725064   48683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:47:15.732181   48683 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 08:47:15.732210   48683 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 08:47:15.732217   48683 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 08:47:15.733102   48683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 08:47:15.733116   48683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 08:47:15.733169   48683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 08:47:15.740768   48683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:47:15.741168   48683 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-090986" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.741273   48683 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "functional-090986" cluster setting kubeconfig missing "functional-090986" context setting]
	I1206 08:47:15.741558   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
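Repairing a kubeconfig that is missing a cluster and context, as the log does here, amounts to loading the file, adding the entries, and writing it back under a lock. A hedged sketch using `k8s.io/client-go/tools/clientcmd` rather than minikube's own kubeconfig package; the certificate paths are placeholders:

```go
package main

import (
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// Path of the kubeconfig to repair; must be non-empty.
	path := os.Getenv("KUBECONFIG")

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}

	// Add the missing cluster, user, and context entries for the profile.
	name := "functional-090986"
	cfg.Clusters[name] = &api.Cluster{
		Server:               "https://192.168.49.2:8441",
		CertificateAuthority: "/path/to/.minikube/ca.crt", // placeholder
	}
	cfg.AuthInfos[name] = &api.AuthInfo{
		ClientCertificate: "/path/to/client.crt", // placeholder
		ClientKey:         "/path/to/client.key", // placeholder
	}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
```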
	I1206 08:47:15.741975   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.742128   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.742650   48683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 08:47:15.742669   48683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 08:47:15.742675   48683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 08:47:15.742680   48683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 08:47:15.742685   48683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 08:47:15.742976   48683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 08:47:15.743070   48683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 08:47:15.750828   48683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 08:47:15.750861   48683 kubeadm.go:602] duration metric: took 17.739612ms to restartPrimaryControlPlane
	I1206 08:47:15.750871   48683 kubeadm.go:403] duration metric: took 55.600148ms to StartCluster
	I1206 08:47:15.750890   48683 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.750966   48683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.751639   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.751842   48683 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 08:47:15.752180   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:15.752232   48683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 08:47:15.752302   48683 addons.go:70] Setting storage-provisioner=true in profile "functional-090986"
	I1206 08:47:15.752319   48683 addons.go:239] Setting addon storage-provisioner=true in "functional-090986"
	I1206 08:47:15.752322   48683 addons.go:70] Setting default-storageclass=true in profile "functional-090986"
	I1206 08:47:15.752340   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.752341   48683 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-090986"
	I1206 08:47:15.752637   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.752784   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.759188   48683 out.go:179] * Verifying Kubernetes components...
	I1206 08:47:15.762058   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:15.783651   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.783826   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.785192   48683 addons.go:239] Setting addon default-storageclass=true in "functional-090986"
	I1206 08:47:15.785238   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.785700   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.797451   48683 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 08:47:15.800625   48683 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:15.800648   48683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 08:47:15.800725   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.810048   48683 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:15.810080   48683 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 08:47:15.810147   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.824818   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:15.853374   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
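
sshutil.go opens key-based SSH sessions to the node through the Docker-published port (127.0.0.1:32788 in this run) as user "docker". A minimal sketch of that kind of connection with golang.org/x/crypto/ssh; the key path is hypothetical, and skipping host-key verification is only tolerable for a throwaway test node:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; minikube keeps it under machines/<profile>/id_rsa.
	keyBytes, err := os.ReadFile("/home/user/.minikube/machines/demo/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // username from the sshutil.go lines above
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	if err != nil {
		fmt.Println("remote command failed:", err)
	}
	fmt.Printf("%s", out)
}
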
	I1206 08:47:15.963935   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:15.994167   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.016409   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:16.722308   48683 node_ready.go:35] waiting up to 6m0s for node "functional-090986" to be "Ready" ...
	I1206 08:47:16.722441   48683 type.go:168] "Request Body" body=""
	I1206 08:47:16.722509   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:16.722791   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.722902   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:16.722979   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722997   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723021   48683 retry.go:31] will retry after 246.599259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722932   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723088   48683 retry.go:31] will retry after 155.728524ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
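
The kubectl apply calls here fail client-side: kubectl tries to download the OpenAPI schema from the apiserver to validate the manifest, the apiserver is still down after the restart, and the result is connection refused plus the --validate=false hint. minikube's retry.go responds by retrying with growing, jittered delays ("will retry after 246.599259ms", then longer). A generic sketch of that retry-with-backoff shape (an illustration of the pattern, not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxElapsed passes,
// roughly doubling a jittered delay between attempts, like the
// "will retry after ..." lines above.
func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
	start := time.Now()
	delay := 150 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("succeeded after", attempts, "attempts")
}
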
	I1206 08:47:16.879530   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.938491   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.942697   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.942739   48683 retry.go:31] will retry after 198.095926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.969843   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.032387   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.037081   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.037167   48683 retry.go:31] will retry after 340.655262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.141488   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:17.200483   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.200581   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.200607   48683 retry.go:31] will retry after 823.921965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.222635   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.222706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.222990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:17.378343   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.437909   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.437949   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.437997   48683 retry.go:31] will retry after 597.373907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.723431   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.723506   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.723862   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.025532   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:18.036222   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:18.102548   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.106195   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.106289   48683 retry.go:31] will retry after 988.595122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128444   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.128537   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128579   48683 retry.go:31] will retry after 1.22957213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.222734   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.222810   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.223190   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.722737   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.722827   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.723191   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:18.723277   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
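
Meanwhile node_ready.go polls GET /api/v1/nodes/functional-090986 roughly every 500ms for up to 6m0s, treating connection refused as transient. A sketch of such a readiness poll with client-go; the package and function names are mine, and clientset construction is as in the kubeconfig sketch earlier:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls the named node until its Ready condition is True or
// the timeout elapses; transient errors (e.g. connection refused while the
// apiserver restarts) are logged and retried, mirroring node_ready.go above.
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}
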
	I1206 08:47:19.095767   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:19.151460   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.155168   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.155201   48683 retry.go:31] will retry after 1.717558752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.223503   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.223595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.223937   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:19.358372   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:19.411770   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.415269   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.415303   48683 retry.go:31] will retry after 781.287082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.722648   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.722942   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.197734   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:20.223123   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.223196   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.223547   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.262283   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.262363   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.262407   48683 retry.go:31] will retry after 1.829414459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.722870   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.722941   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.723284   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:20.723338   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:20.873661   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:20.932799   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.936985   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.937020   48683 retry.go:31] will retry after 2.554499586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:21.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.223553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.223934   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:21.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.722674   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.723048   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.092657   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:22.149785   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:22.153326   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.153368   48683 retry.go:31] will retry after 2.084938041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.222743   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.223181   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.722901   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.722987   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.723330   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:22.723402   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:23.223196   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.223285   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.223660   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:23.492173   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:23.557652   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:23.557715   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.557741   48683 retry.go:31] will retry after 4.19827742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.723091   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.723166   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.723482   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.223263   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.223339   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.223623   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.238906   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:24.307275   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:24.307320   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.307339   48683 retry.go:31] will retry after 4.494270685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.722793   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.722877   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.723244   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:25.222930   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.223006   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.223365   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:25.223455   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:25.723213   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.723279   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.723596   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.223491   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.223588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.223913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.722621   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.722699   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.723036   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.222525   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.222628   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.222892   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.722651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.722982   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:27.723035   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:27.756528   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:27.814954   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:27.818792   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:27.818824   48683 retry.go:31] will retry after 5.399057422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:28.223412   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.223490   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.223811   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.723414   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.723485   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.723794   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.802108   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:28.864913   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:28.864953   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:28.864972   48683 retry.go:31] will retry after 3.285056528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:29.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.223556   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.223857   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:29.722601   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.723030   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:29.723087   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:30.222650   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.222720   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.223035   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:30.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.222982   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.223061   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.223424   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.723202   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.723273   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.723614   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:31.723661   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:32.150291   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:32.207920   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:32.211781   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.211813   48683 retry.go:31] will retry after 10.805243336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.223065   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.223158   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.223541   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:32.723329   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.723438   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.723744   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.218182   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:33.222610   48683 type.go:168] "Request Body" body=""
	I1206 08:47:33.222677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:33.222931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.295753   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:33.295946   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:33.295967   48683 retry.go:31] will retry after 9.227502372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[readiness poll condensed: GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 repeated every ~0.5s from 08:47:33.723 to 08:47:42.223, each attempt returning no response; node_ready.go:55 warned periodically: error getting node "functional-090986" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused; identical request/response lines omitted]
	I1206 08:47:42.523700   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:42.586651   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:42.586695   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:42.586713   48683 retry.go:31] will retry after 12.2898811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:42.723024   48683 type.go:168] "Request Body" body=""
	I1206 08:47:42.723100   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:42.723445   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:43.017838   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:43.079371   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:43.079435   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:43.079458   48683 retry.go:31] will retry after 19.494910144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
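
For reference, the GET loop that produces the request/response lines throughout this log is a node-readiness poll. A self-contained sketch of the same check using client-go, assuming the kubeconfig path from the commands above; an illustration only, not minikube's node_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-090986", metav1.GetOptions{})
            if err != nil {
                // "connection refused" lands here while the apiserver is down
                fmt.Println("will retry:", err)
                time.Sleep(500 * time.Millisecond)
                continue
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for Ready")
    }
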
	[readiness poll condensed: GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 repeated every ~0.5s from 08:47:43.222 to 08:47:54.723; no response on any attempt, node_ready.go:55 connection-refused warnings throughout; identical request/response lines omitted]
	I1206 08:47:54.877464   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:54.933804   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:54.937955   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:54.937987   48683 retry.go:31] will retry after 17.91075527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
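
Every failure above reduces to the same symptom: nothing is listening on port 8441. A quick standalone probe in Go (the /healthz endpoint and insecure TLS setting are assumptions for local debugging) distinguishes a down apiserver from an up-but-unhealthy one:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   3 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8441/healthz")
        if err != nil {
            // e.g. dial tcp 192.168.49.2:8441: connect: connection refused
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver answered:", resp.Status)
    }
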
	[readiness poll condensed: same GET against /api/v1/nodes/functional-090986 every ~0.5s from 08:47:55.223 to 08:48:02.223; connection refused throughout; identical request/response lines omitted]
	I1206 08:48:02.575367   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:02.637904   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:02.637958   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:02.637977   48683 retry.go:31] will retry after 12.943468008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
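
As kubectl's own stderr notes, the validation failure is client-side: downloading the OpenAPI schema needs the apiserver, so while it is down even a well-formed manifest fails validation. Skipping validation (a manual workaround, not something the test does) would look like:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force --validate=false -f /etc/kubernetes/addons/storageclass.yaml

though the apply itself would still fail until the apiserver accepts connections.
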
	[readiness poll condensed: same GET against /api/v1/nodes/functional-090986 every ~0.5s from 08:48:02.723 to 08:48:12.722; connection refused throughout; identical request/response lines omitted]
	I1206 08:48:12.849275   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:12.904952   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:12.908634   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:12.908667   48683 retry.go:31] will retry after 25.236445918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[readiness poll condensed: same GET against /api/v1/nodes/functional-090986 every ~0.5s from 08:48:13.223 to 08:48:15.222; connection refused throughout; identical request/response lines omitted]
	I1206 08:48:15.582577   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:15.646326   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:15.649856   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:15.649887   48683 retry.go:31] will retry after 20.09954841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[readiness poll condensed: same GET against /api/v1/nodes/functional-090986 every ~0.5s from 08:48:15.723 to 08:48:23.223; connection refused throughout; identical request/response lines omitted; the log continues below and is truncated mid-request]
	I1206 08:48:23.722801   48683 type.go:168] "Request Body" body=""
	I1206 08:48:23.722870   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:23.723132   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:23.723172   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:24.222563   48683 type.go:168] "Request Body" body=""
	I1206 08:48:24.222639   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:24.222974   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:24.722536   48683 type.go:168] "Request Body" body=""
	I1206 08:48:24.722609   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:24.722956   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:25.223202   48683 type.go:168] "Request Body" body=""
	I1206 08:48:25.223267   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:25.223549   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:25.723347   48683 type.go:168] "Request Body" body=""
	I1206 08:48:25.723448   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:25.723817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:25.723881   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:26.222857   48683 type.go:168] "Request Body" body=""
	I1206 08:48:26.222930   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:26.223262   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:26.722500   48683 type.go:168] "Request Body" body=""
	I1206 08:48:26.722570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:26.722886   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:27.222528   48683 type.go:168] "Request Body" body=""
	I1206 08:48:27.222598   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:27.222916   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:27.722616   48683 type.go:168] "Request Body" body=""
	I1206 08:48:27.722695   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:27.723043   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:28.222543   48683 type.go:168] "Request Body" body=""
	I1206 08:48:28.222622   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:28.222922   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:28.222981   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:28.722606   48683 type.go:168] "Request Body" body=""
	I1206 08:48:28.722703   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:28.723095   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:29.222574   48683 type.go:168] "Request Body" body=""
	I1206 08:48:29.222649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:29.222993   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:29.722674   48683 type.go:168] "Request Body" body=""
	I1206 08:48:29.722742   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:29.723069   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:30.222789   48683 type.go:168] "Request Body" body=""
	I1206 08:48:30.222864   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:30.223189   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:30.223256   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:30.722578   48683 type.go:168] "Request Body" body=""
	I1206 08:48:30.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:30.722991   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:31.223492   48683 type.go:168] "Request Body" body=""
	I1206 08:48:31.223567   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:31.223833   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:31.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:48:31.722637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:31.722991   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:32.222686   48683 type.go:168] "Request Body" body=""
	I1206 08:48:32.222773   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:32.223092   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:32.722517   48683 type.go:168] "Request Body" body=""
	I1206 08:48:32.722582   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:32.722842   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:32.722882   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:33.222543   48683 type.go:168] "Request Body" body=""
	I1206 08:48:33.222618   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:33.222970   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:33.722516   48683 type.go:168] "Request Body" body=""
	I1206 08:48:33.722591   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:33.722945   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:34.223311   48683 type.go:168] "Request Body" body=""
	I1206 08:48:34.223394   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:34.223656   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:34.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:48:34.723571   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:34.723917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:34.723969   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:35.222564   48683 type.go:168] "Request Body" body=""
	I1206 08:48:35.222638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:35.222962   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:35.722532   48683 type.go:168] "Request Body" body=""
	I1206 08:48:35.722600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:35.722854   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:35.750369   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:35.818338   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818385   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818494   48683 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... four more identical polls from 08:48:36.223 through 08:48:37.723, with the same node_ready.go:55 connection-refused warning at 08:48:37.223 ...]
	I1206 08:48:38.145414   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:38.206093   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210075   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210171   48683 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
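Both addon applies above fail for the same reason the node polls do: kubectl performs client-side validation by fetching the OpenAPI schema from the apiserver, and with nothing listening on port 8441 that fetch is refused, so minikube records "apply failed, will retry" for storageclass.yaml and storage-provisioner.yaml alike (kubectl's own error text points at --validate=false as the way to skip the schema fetch). A hedged sketch of such a retry wrapper follows; the command line is copied from the log, while the attempt count and the 2 s backoff (the log spaces the two applies about 2.4 s apart) are assumptions, not minikube's actual addons.go logic.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry shells out the same way the log shows:
//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f <manifest>
func applyWithRetry(manifest string, attempts int) error {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		// Mirrors the "apply failed, will retry" warning in the log.
		lastErr = fmt.Errorf("apply failed, will retry: %v\noutput:\n%s", err, out)
		time.Sleep(2 * time.Second) // assumed backoff
	}
	return lastErr
}

func main() {
	for _, manifest := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyWithRetry(manifest, 5); err != nil {
			fmt.Println(err)
		}
	}
}

Since the retries here never see the apiserver come up, the run finishes with the empty "Enabled addons:" list recorded just below.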
	I1206 08:48:38.213345   48683 out.go:179] * Enabled addons: 
	I1206 08:48:38.217127   48683 addons.go:530] duration metric: took 1m22.464883403s for enable addons: enabled=[]
	[... the ~500 ms poll/refusal cycle continues unchanged from 08:48:38.223 through 08:49:14.722; node_ready.go:55 repeats the connection-refused "will retry" warning at roughly 2 s intervals (08:48:39, :41, :44, :46, :48, :51, :53, :55, :58, 08:49:00, :02, :04, :06, :09, :11, :13), and every response stays status="" with milliseconds=0 apart from a single milliseconds=1 at 08:49:11.224 ...]
	I1206 08:49:15.222870   48683 type.go:168] "Request Body" body=""
	I1206 08:49:15.222963   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:15.223324   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:15.722751   48683 type.go:168] "Request Body" body=""
	I1206 08:49:15.722830   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:15.723164   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:15.723220   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:16.223389   48683 type.go:168] "Request Body" body=""
	I1206 08:49:16.223501   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:16.223841   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:16.723482   48683 type.go:168] "Request Body" body=""
	I1206 08:49:16.723553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:16.723936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:17.222504   48683 type.go:168] "Request Body" body=""
	I1206 08:49:17.222580   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:17.222930   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:17.722456   48683 type.go:168] "Request Body" body=""
	I1206 08:49:17.722525   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:17.722830   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:18.222500   48683 type.go:168] "Request Body" body=""
	I1206 08:49:18.222575   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:18.222913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:18.222970   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:18.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:49:18.722612   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:18.722957   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:19.223415   48683 type.go:168] "Request Body" body=""
	I1206 08:49:19.223481   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:19.223744   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:19.723518   48683 type.go:168] "Request Body" body=""
	I1206 08:49:19.723592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:19.723932   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:20.222529   48683 type.go:168] "Request Body" body=""
	I1206 08:49:20.222604   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:20.222980   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:20.223052   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:20.723464   48683 type.go:168] "Request Body" body=""
	I1206 08:49:20.723534   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:20.723877   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:21.222834   48683 type.go:168] "Request Body" body=""
	I1206 08:49:21.222916   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:21.223278   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:21.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:49:21.722665   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:21.723037   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:22.222535   48683 type.go:168] "Request Body" body=""
	I1206 08:49:22.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:22.223029   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:22.223081   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:22.722589   48683 type.go:168] "Request Body" body=""
	I1206 08:49:22.722661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:22.723051   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:23.222635   48683 type.go:168] "Request Body" body=""
	I1206 08:49:23.222710   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:23.223010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:23.722511   48683 type.go:168] "Request Body" body=""
	I1206 08:49:23.722583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:23.722907   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:24.222593   48683 type.go:168] "Request Body" body=""
	I1206 08:49:24.222679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:24.223059   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:24.223115   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:24.722807   48683 type.go:168] "Request Body" body=""
	I1206 08:49:24.722887   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:24.723288   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:25.223044   48683 type.go:168] "Request Body" body=""
	I1206 08:49:25.223114   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:25.223419   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:25.723206   48683 type.go:168] "Request Body" body=""
	I1206 08:49:25.723280   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:25.723645   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:26.222468   48683 type.go:168] "Request Body" body=""
	I1206 08:49:26.222541   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:26.222888   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:26.722538   48683 type.go:168] "Request Body" body=""
	I1206 08:49:26.722616   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:26.722868   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:26.722924   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:27.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:49:27.222618   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:27.222966   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:27.722668   48683 type.go:168] "Request Body" body=""
	I1206 08:49:27.722745   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:27.723116   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:28.222806   48683 type.go:168] "Request Body" body=""
	I1206 08:49:28.222880   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:28.223155   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:28.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:49:28.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:28.723088   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:28.723155   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:29.222670   48683 type.go:168] "Request Body" body=""
	I1206 08:49:29.222755   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:29.223135   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:29.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:49:29.722634   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:29.722895   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:30.222571   48683 type.go:168] "Request Body" body=""
	I1206 08:49:30.222645   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:30.222996   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:30.722682   48683 type.go:168] "Request Body" body=""
	I1206 08:49:30.722768   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:30.723166   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:30.723221   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:31.223009   48683 type.go:168] "Request Body" body=""
	I1206 08:49:31.223094   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:31.223410   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:31.723177   48683 type.go:168] "Request Body" body=""
	I1206 08:49:31.723280   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:31.723629   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:32.223466   48683 type.go:168] "Request Body" body=""
	I1206 08:49:32.223541   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:32.223936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:32.722617   48683 type.go:168] "Request Body" body=""
	I1206 08:49:32.722684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:32.722984   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:33.222572   48683 type.go:168] "Request Body" body=""
	I1206 08:49:33.222647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:33.222977   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:33.223031   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:33.722723   48683 type.go:168] "Request Body" body=""
	I1206 08:49:33.722796   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:33.723147   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:34.222719   48683 type.go:168] "Request Body" body=""
	I1206 08:49:34.222791   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:34.223074   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:34.722746   48683 type.go:168] "Request Body" body=""
	I1206 08:49:34.722818   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:34.723175   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:35.222890   48683 type.go:168] "Request Body" body=""
	I1206 08:49:35.222977   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:35.223336   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:35.223421   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:35.723153   48683 type.go:168] "Request Body" body=""
	I1206 08:49:35.723223   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:35.723599   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:36.223510   48683 type.go:168] "Request Body" body=""
	I1206 08:49:36.223602   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:36.223964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:36.722569   48683 type.go:168] "Request Body" body=""
	I1206 08:49:36.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:36.723010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:37.222512   48683 type.go:168] "Request Body" body=""
	I1206 08:49:37.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:37.222842   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:37.722572   48683 type.go:168] "Request Body" body=""
	I1206 08:49:37.722645   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:37.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:37.723047   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:38.222686   48683 type.go:168] "Request Body" body=""
	I1206 08:49:38.222765   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:38.223119   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:38.722610   48683 type.go:168] "Request Body" body=""
	I1206 08:49:38.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:38.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:39.222653   48683 type.go:168] "Request Body" body=""
	I1206 08:49:39.222728   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:39.223084   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:39.722817   48683 type.go:168] "Request Body" body=""
	I1206 08:49:39.722896   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:39.723225   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:39.723274   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:40.222564   48683 type.go:168] "Request Body" body=""
	I1206 08:49:40.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:40.223023   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:40.722742   48683 type.go:168] "Request Body" body=""
	I1206 08:49:40.722820   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:40.723169   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:41.222970   48683 type.go:168] "Request Body" body=""
	I1206 08:49:41.223060   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:41.223424   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:41.723194   48683 type.go:168] "Request Body" body=""
	I1206 08:49:41.723270   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:41.723557   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:41.723610   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:42.223426   48683 type.go:168] "Request Body" body=""
	I1206 08:49:42.223508   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:42.223855   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:42.722579   48683 type.go:168] "Request Body" body=""
	I1206 08:49:42.722655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:42.723008   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:43.222519   48683 type.go:168] "Request Body" body=""
	I1206 08:49:43.222591   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:43.222864   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:43.722547   48683 type.go:168] "Request Body" body=""
	I1206 08:49:43.722618   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:43.722917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:44.222612   48683 type.go:168] "Request Body" body=""
	I1206 08:49:44.222685   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:44.223025   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:44.223081   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:44.722483   48683 type.go:168] "Request Body" body=""
	I1206 08:49:44.722565   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:44.722832   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:45.222605   48683 type.go:168] "Request Body" body=""
	I1206 08:49:45.222714   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:45.223204   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:45.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:49:45.722651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:45.722964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:46.223140   48683 type.go:168] "Request Body" body=""
	I1206 08:49:46.223214   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:46.223549   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:46.223591   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:46.723311   48683 type.go:168] "Request Body" body=""
	I1206 08:49:46.723406   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:46.723743   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:47.222461   48683 type.go:168] "Request Body" body=""
	I1206 08:49:47.222537   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:47.222889   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:47.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:49:47.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:47.722902   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:48.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:49:48.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:48.223027   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:48.722771   48683 type.go:168] "Request Body" body=""
	I1206 08:49:48.722869   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:48.723227   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:48.723289   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:49.222922   48683 type.go:168] "Request Body" body=""
	I1206 08:49:49.222993   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:49.223256   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:49.723128   48683 type.go:168] "Request Body" body=""
	I1206 08:49:49.723204   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:49.723574   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:50.223420   48683 type.go:168] "Request Body" body=""
	I1206 08:49:50.223491   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:50.223824   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:50.722515   48683 type.go:168] "Request Body" body=""
	I1206 08:49:50.722583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:50.722856   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:51.223538   48683 type.go:168] "Request Body" body=""
	I1206 08:49:51.223610   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:51.223931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:51.223984   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:51.722501   48683 type.go:168] "Request Body" body=""
	I1206 08:49:51.722574   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:51.722889   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:52.222457   48683 type.go:168] "Request Body" body=""
	I1206 08:49:52.222528   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:52.222799   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:52.722542   48683 type.go:168] "Request Body" body=""
	I1206 08:49:52.722621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:52.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:53.222576   48683 type.go:168] "Request Body" body=""
	I1206 08:49:53.222646   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:53.222986   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:53.723440   48683 type.go:168] "Request Body" body=""
	I1206 08:49:53.723514   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:53.723868   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:53.723922   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:54.222571   48683 type.go:168] "Request Body" body=""
	I1206 08:49:54.222646   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:54.222982   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:54.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:49:54.722637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:54.723007   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:55.222545   48683 type.go:168] "Request Body" body=""
	I1206 08:49:55.222641   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:55.222936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:55.722583   48683 type.go:168] "Request Body" body=""
	I1206 08:49:55.722677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:55.723009   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:56.223162   48683 type.go:168] "Request Body" body=""
	I1206 08:49:56.223235   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:56.223592   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:56.223647   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:56.723346   48683 type.go:168] "Request Body" body=""
	I1206 08:49:56.723440   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:56.723715   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:57.223483   48683 type.go:168] "Request Body" body=""
	I1206 08:49:57.223563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:57.224002   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:57.722697   48683 type.go:168] "Request Body" body=""
	I1206 08:49:57.722767   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:57.723097   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:58.222803   48683 type.go:168] "Request Body" body=""
	I1206 08:49:58.222876   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:58.223156   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:58.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:49:58.722626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:58.722960   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:58.723019   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:59.222551   48683 type.go:168] "Request Body" body=""
	I1206 08:49:59.222626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:59.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:59.723484   48683 type.go:168] "Request Body" body=""
	I1206 08:49:59.723553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:59.723878   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:00.222685   48683 type.go:168] "Request Body" body=""
	I1206 08:50:00.222804   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:00.223133   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:00.722618   48683 type.go:168] "Request Body" body=""
	I1206 08:50:00.722691   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:00.723059   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:00.723115   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:01.222895   48683 type.go:168] "Request Body" body=""
	I1206 08:50:01.222993   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:01.223286   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:01.722600   48683 type.go:168] "Request Body" body=""
	I1206 08:50:01.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:01.723014   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:02.222576   48683 type.go:168] "Request Body" body=""
	I1206 08:50:02.222651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:02.223022   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:02.722457   48683 type.go:168] "Request Body" body=""
	I1206 08:50:02.722533   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:02.722815   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:03.222502   48683 type.go:168] "Request Body" body=""
	I1206 08:50:03.222573   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:03.222946   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:03.222994   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:03.722541   48683 type.go:168] "Request Body" body=""
	I1206 08:50:03.722640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:03.722983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:04.222608   48683 type.go:168] "Request Body" body=""
	I1206 08:50:04.222676   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:04.223006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:04.722602   48683 type.go:168] "Request Body" body=""
	I1206 08:50:04.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:04.723041   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:05.222818   48683 type.go:168] "Request Body" body=""
	I1206 08:50:05.222895   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:05.223192   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:05.223237   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical "Request"/"Response" pairs for GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 repeat every ~500ms from 08:50:05.722 through 08:51:05.223, each responding in 0ms with empty status against the refused connection; node_ready.go:55 emits the same `connection refused` (will retry) warning roughly every 2-2.5s throughout ...]
	I1206 08:51:05.722471   48683 type.go:168] "Request Body" body=""
	I1206 08:51:05.722549   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:05.722864   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:06.223041   48683 type.go:168] "Request Body" body=""
	I1206 08:51:06.223120   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:06.223579   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:06.723396   48683 type.go:168] "Request Body" body=""
	I1206 08:51:06.723470   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:06.723824   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:07.222502   48683 type.go:168] "Request Body" body=""
	I1206 08:51:07.222581   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:07.222893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:07.722605   48683 type.go:168] "Request Body" body=""
	I1206 08:51:07.722673   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:07.723011   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:07.723085   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:08.222754   48683 type.go:168] "Request Body" body=""
	I1206 08:51:08.222842   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:08.223191   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:08.722662   48683 type.go:168] "Request Body" body=""
	I1206 08:51:08.722736   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:08.723038   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:09.222745   48683 type.go:168] "Request Body" body=""
	I1206 08:51:09.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:09.223142   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:09.722861   48683 type.go:168] "Request Body" body=""
	I1206 08:51:09.722941   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:09.723235   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:09.723279   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:10.222634   48683 type.go:168] "Request Body" body=""
	I1206 08:51:10.222706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:10.222971   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:10.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:51:10.722638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:10.722937   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:11.223528   48683 type.go:168] "Request Body" body=""
	I1206 08:51:11.223600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:11.223913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:11.723112   48683 type.go:168] "Request Body" body=""
	I1206 08:51:11.723177   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:11.723461   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:11.723503   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:12.223246   48683 type.go:168] "Request Body" body=""
	I1206 08:51:12.223319   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:12.223682   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:12.723520   48683 type.go:168] "Request Body" body=""
	I1206 08:51:12.723593   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:12.723946   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:13.222536   48683 type.go:168] "Request Body" body=""
	I1206 08:51:13.222617   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:13.222887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:13.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:51:13.722658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:13.722958   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:14.222582   48683 type.go:168] "Request Body" body=""
	I1206 08:51:14.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:14.222989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:14.223043   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:14.722452   48683 type.go:168] "Request Body" body=""
	I1206 08:51:14.722533   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:14.722845   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:15.222537   48683 type.go:168] "Request Body" body=""
	I1206 08:51:15.222613   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:15.222975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:15.722573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:15.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:15.723240   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:16.222683   48683 type.go:168] "Request Body" body=""
	I1206 08:51:16.222764   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:16.223039   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:16.223086   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:16.722581   48683 type.go:168] "Request Body" body=""
	I1206 08:51:16.722677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:16.723021   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:17.222582   48683 type.go:168] "Request Body" body=""
	I1206 08:51:17.222654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:17.223008   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:17.722565   48683 type.go:168] "Request Body" body=""
	I1206 08:51:17.722636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:17.722985   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:18.222580   48683 type.go:168] "Request Body" body=""
	I1206 08:51:18.222651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:18.222983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:18.722586   48683 type.go:168] "Request Body" body=""
	I1206 08:51:18.722665   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:18.723004   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:18.723061   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:19.222506   48683 type.go:168] "Request Body" body=""
	I1206 08:51:19.222578   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:19.222917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:19.722545   48683 type.go:168] "Request Body" body=""
	I1206 08:51:19.722616   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:19.722960   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:20.222654   48683 type.go:168] "Request Body" body=""
	I1206 08:51:20.222725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:20.223047   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:20.722711   48683 type.go:168] "Request Body" body=""
	I1206 08:51:20.722782   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:20.723050   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:20.723099   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:21.223085   48683 type.go:168] "Request Body" body=""
	I1206 08:51:21.223158   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:21.223561   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:21.723342   48683 type.go:168] "Request Body" body=""
	I1206 08:51:21.723426   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:21.723759   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:22.223473   48683 type.go:168] "Request Body" body=""
	I1206 08:51:22.223543   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:22.223901   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:22.722650   48683 type.go:168] "Request Body" body=""
	I1206 08:51:22.722720   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:22.723089   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:22.723144   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:23.222822   48683 type.go:168] "Request Body" body=""
	I1206 08:51:23.222899   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:23.223255   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:23.722516   48683 type.go:168] "Request Body" body=""
	I1206 08:51:23.722584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:23.722930   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:24.222655   48683 type.go:168] "Request Body" body=""
	I1206 08:51:24.222728   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:24.223082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:24.722672   48683 type.go:168] "Request Body" body=""
	I1206 08:51:24.722766   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:24.723136   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:24.723192   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:25.222510   48683 type.go:168] "Request Body" body=""
	I1206 08:51:25.222581   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:25.222889   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:25.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:51:25.722620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:25.722954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:26.223082   48683 type.go:168] "Request Body" body=""
	I1206 08:51:26.223153   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:26.223523   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:26.723172   48683 type.go:168] "Request Body" body=""
	I1206 08:51:26.723245   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:26.723542   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:26.723585   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:27.223401   48683 type.go:168] "Request Body" body=""
	I1206 08:51:27.223474   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:27.223854   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:27.722551   48683 type.go:168] "Request Body" body=""
	I1206 08:51:27.722624   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:27.722945   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:28.223483   48683 type.go:168] "Request Body" body=""
	I1206 08:51:28.223564   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:28.223873   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:28.722618   48683 type.go:168] "Request Body" body=""
	I1206 08:51:28.722696   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:28.723057   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:29.222647   48683 type.go:168] "Request Body" body=""
	I1206 08:51:29.222739   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:29.223145   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:29.223197   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:29.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:51:29.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:29.722968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:30.222659   48683 type.go:168] "Request Body" body=""
	I1206 08:51:30.222740   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:30.223109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:30.722586   48683 type.go:168] "Request Body" body=""
	I1206 08:51:30.722659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:30.723015   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:31.222877   48683 type.go:168] "Request Body" body=""
	I1206 08:51:31.222948   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:31.223216   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:31.223257   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:31.722581   48683 type.go:168] "Request Body" body=""
	I1206 08:51:31.722657   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:31.722986   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:32.222702   48683 type.go:168] "Request Body" body=""
	I1206 08:51:32.222778   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:32.223128   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:32.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:51:32.722632   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:32.722905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:33.222600   48683 type.go:168] "Request Body" body=""
	I1206 08:51:33.222731   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:33.223068   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:33.722763   48683 type.go:168] "Request Body" body=""
	I1206 08:51:33.722837   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:33.723186   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:33.723243   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:34.223469   48683 type.go:168] "Request Body" body=""
	I1206 08:51:34.223541   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:34.223815   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:34.722512   48683 type.go:168] "Request Body" body=""
	I1206 08:51:34.722584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:34.722905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:35.222597   48683 type.go:168] "Request Body" body=""
	I1206 08:51:35.222685   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:35.223031   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:35.722528   48683 type.go:168] "Request Body" body=""
	I1206 08:51:35.722600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:35.722870   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:36.223103   48683 type.go:168] "Request Body" body=""
	I1206 08:51:36.223184   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:36.223557   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:36.223614   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:36.723236   48683 type.go:168] "Request Body" body=""
	I1206 08:51:36.723314   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:36.723677   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:37.223456   48683 type.go:168] "Request Body" body=""
	I1206 08:51:37.223536   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:37.223814   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:37.722521   48683 type.go:168] "Request Body" body=""
	I1206 08:51:37.722595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:37.722941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:38.222667   48683 type.go:168] "Request Body" body=""
	I1206 08:51:38.222743   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:38.223128   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:38.722867   48683 type.go:168] "Request Body" body=""
	I1206 08:51:38.722943   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:38.723253   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:38.723310   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:39.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:51:39.222649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:39.223000   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:39.722686   48683 type.go:168] "Request Body" body=""
	I1206 08:51:39.722767   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:39.723127   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:40.222805   48683 type.go:168] "Request Body" body=""
	I1206 08:51:40.222893   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:40.223247   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:40.722586   48683 type.go:168] "Request Body" body=""
	I1206 08:51:40.722664   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:40.723068   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:41.223068   48683 type.go:168] "Request Body" body=""
	I1206 08:51:41.223147   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:41.223511   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:41.223567   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:41.723311   48683 type.go:168] "Request Body" body=""
	I1206 08:51:41.723402   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:41.723663   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:42.223489   48683 type.go:168] "Request Body" body=""
	I1206 08:51:42.223566   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:42.223933   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:42.722618   48683 type.go:168] "Request Body" body=""
	I1206 08:51:42.722694   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:42.723031   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:43.222740   48683 type.go:168] "Request Body" body=""
	I1206 08:51:43.222816   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:43.223098   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:43.722547   48683 type.go:168] "Request Body" body=""
	I1206 08:51:43.722622   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:43.722965   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:43.723044   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:44.222550   48683 type.go:168] "Request Body" body=""
	I1206 08:51:44.222647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:44.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:44.722528   48683 type.go:168] "Request Body" body=""
	I1206 08:51:44.722603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:44.722920   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:45.222681   48683 type.go:168] "Request Body" body=""
	I1206 08:51:45.222768   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:45.223254   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:45.723085   48683 type.go:168] "Request Body" body=""
	I1206 08:51:45.723156   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:45.723536   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:45.723592   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:46.223392   48683 type.go:168] "Request Body" body=""
	I1206 08:51:46.223456   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:46.223709   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:46.722472   48683 type.go:168] "Request Body" body=""
	I1206 08:51:46.722550   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:46.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:47.222580   48683 type.go:168] "Request Body" body=""
	I1206 08:51:47.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:47.223014   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:47.722500   48683 type.go:168] "Request Body" body=""
	I1206 08:51:47.722572   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:47.722920   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:48.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:48.222647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:48.222994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:48.223050   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:48.722729   48683 type.go:168] "Request Body" body=""
	I1206 08:51:48.722814   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:48.723224   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:49.222495   48683 type.go:168] "Request Body" body=""
	I1206 08:51:49.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:49.222841   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:49.722543   48683 type.go:168] "Request Body" body=""
	I1206 08:51:49.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:49.722989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:50.222560   48683 type.go:168] "Request Body" body=""
	I1206 08:51:50.222640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:50.222975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:50.722647   48683 type.go:168] "Request Body" body=""
	I1206 08:51:50.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:50.723039   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:50.723088   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:51.222890   48683 type.go:168] "Request Body" body=""
	I1206 08:51:51.222961   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:51.223302   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:51.723095   48683 type.go:168] "Request Body" body=""
	I1206 08:51:51.723166   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:51.723527   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:52.223293   48683 type.go:168] "Request Body" body=""
	I1206 08:51:52.223365   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:52.223638   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:52.723480   48683 type.go:168] "Request Body" body=""
	I1206 08:51:52.723556   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:52.723872   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:52.723957   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:53.222573   48683 type.go:168] "Request Body" body=""
	I1206 08:51:53.222650   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:53.222971   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:53.722667   48683 type.go:168] "Request Body" body=""
	I1206 08:51:53.722737   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:53.723003   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:54.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:51:54.222637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:54.222983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:54.722549   48683 type.go:168] "Request Body" body=""
	I1206 08:51:54.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:54.722987   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:55.223524   48683 type.go:168] "Request Body" body=""
	I1206 08:51:55.223593   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:55.223922   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:55.223979   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:55.722631   48683 type.go:168] "Request Body" body=""
	I1206 08:51:55.722706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:55.723040   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 poll repeats roughly every 500ms from 08:51:56 through 08:52:57; every attempt returns an empty response with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs the same "will retry" warning about every 2 seconds ...]
	I1206 08:52:57.722623   48683 type.go:168] "Request Body" body=""
	I1206 08:52:57.722694   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:57.723053   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:58.222555   48683 type.go:168] "Request Body" body=""
	I1206 08:52:58.222627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:58.222882   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:58.722550   48683 type.go:168] "Request Body" body=""
	I1206 08:52:58.722619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:58.722923   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:59.222596   48683 type.go:168] "Request Body" body=""
	I1206 08:52:59.222674   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:59.223010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:59.223071   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:59.722703   48683 type.go:168] "Request Body" body=""
	I1206 08:52:59.722774   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:59.723041   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:00.222680   48683 type.go:168] "Request Body" body=""
	I1206 08:53:00.222765   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:00.223070   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:00.722902   48683 type.go:168] "Request Body" body=""
	I1206 08:53:00.722974   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:00.723300   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:01.223304   48683 type.go:168] "Request Body" body=""
	I1206 08:53:01.223397   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:01.223655   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:01.223703   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:01.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:53:01.723563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:01.723888   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:02.222575   48683 type.go:168] "Request Body" body=""
	I1206 08:53:02.222658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:02.223040   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:02.722720   48683 type.go:168] "Request Body" body=""
	I1206 08:53:02.722789   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:02.723094   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:03.222570   48683 type.go:168] "Request Body" body=""
	I1206 08:53:03.222643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:03.223006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:03.722718   48683 type.go:168] "Request Body" body=""
	I1206 08:53:03.722800   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:03.723133   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:03.723188   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:04.222478   48683 type.go:168] "Request Body" body=""
	I1206 08:53:04.222547   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:04.222820   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:04.722518   48683 type.go:168] "Request Body" body=""
	I1206 08:53:04.722592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:04.722965   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:05.222540   48683 type.go:168] "Request Body" body=""
	I1206 08:53:05.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:05.222941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:05.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:53:05.722596   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:05.722915   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:06.223065   48683 type.go:168] "Request Body" body=""
	I1206 08:53:06.223136   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:06.223522   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:06.223575   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:06.723193   48683 type.go:168] "Request Body" body=""
	I1206 08:53:06.723275   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:06.723670   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:07.223474   48683 type.go:168] "Request Body" body=""
	I1206 08:53:07.223549   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:07.223817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:07.722518   48683 type.go:168] "Request Body" body=""
	I1206 08:53:07.722603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:07.722954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:08.222651   48683 type.go:168] "Request Body" body=""
	I1206 08:53:08.222735   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:08.223112   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:08.722793   48683 type.go:168] "Request Body" body=""
	I1206 08:53:08.722864   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:08.723164   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:08.723216   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:09.222584   48683 type.go:168] "Request Body" body=""
	I1206 08:53:09.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:09.222992   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:09.722671   48683 type.go:168] "Request Body" body=""
	I1206 08:53:09.722749   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:09.723103   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:10.222760   48683 type.go:168] "Request Body" body=""
	I1206 08:53:10.222832   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:10.223102   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:10.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:53:10.722631   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:10.722988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:11.222770   48683 type.go:168] "Request Body" body=""
	I1206 08:53:11.222841   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:11.223177   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:11.223230   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:11.723479   48683 type.go:168] "Request Body" body=""
	I1206 08:53:11.723562   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:11.723836   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:12.222547   48683 type.go:168] "Request Body" body=""
	I1206 08:53:12.222623   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:12.222981   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:12.722692   48683 type.go:168] "Request Body" body=""
	I1206 08:53:12.722772   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:12.723109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:13.222517   48683 type.go:168] "Request Body" body=""
	I1206 08:53:13.222590   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:13.222851   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:13.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:53:13.722599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:13.722955   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:13.723015   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:14.222726   48683 type.go:168] "Request Body" body=""
	I1206 08:53:14.222802   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:14.223149   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:14.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:53:14.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:14.722912   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:15.222538   48683 type.go:168] "Request Body" body=""
	I1206 08:53:15.222617   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:15.222967   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:15.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:53:15.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:15.722981   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:16.222929   48683 type.go:168] "Request Body" body=""
	I1206 08:53:16.223005   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:16.223275   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:16.223314   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:16.723228   48683 type.go:168] "Request Body" body=""
	I1206 08:53:16.723311   48683 node_ready.go:38] duration metric: took 6m0.000967258s for node "functional-090986" to be "Ready" ...
	I1206 08:53:16.726672   48683 out.go:203] 
	W1206 08:53:16.729718   48683 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 08:53:16.729749   48683 out.go:285] * 
	W1206 08:53:16.732326   48683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:53:16.735459   48683 out.go:203] 
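
(For reference: the loop above is minikube's node_ready.go repeatedly polling the node's Ready condition. Assuming a reachable apiserver, the equivalent manual check is a single kubectl jsonpath query; against this cluster it would fail with the same connection refused:)

	# Print the node's Ready condition status; "True" once the kubelet reports ready.
	kubectl --context functional-090986 get node functional-090986 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'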
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:53:24 functional-090986 containerd[5266]: time="2025-12-06T08:53:24.485393663Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:25 functional-090986 containerd[5266]: time="2025-12-06T08:53:25.558263180Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 06 08:53:25 functional-090986 containerd[5266]: time="2025-12-06T08:53:25.560376858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 06 08:53:25 functional-090986 containerd[5266]: time="2025-12-06T08:53:25.567036338Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:25 functional-090986 containerd[5266]: time="2025-12-06T08:53:25.567440259Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:26 functional-090986 containerd[5266]: time="2025-12-06T08:53:26.529633151Z" level=info msg="No images store for sha256:864d91b111549b1a614e1c2b69622472824686140966b61b9bf5ed9bf10b7a66"
	Dec 06 08:53:26 functional-090986 containerd[5266]: time="2025-12-06T08:53:26.531958316Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-090986\""
	Dec 06 08:53:26 functional-090986 containerd[5266]: time="2025-12-06T08:53:26.541003148Z" level=info msg="ImageCreate event name:\"sha256:5294eb1309299d240981eee230965d3e70b3f5d29d3eca33acb510d478dc7d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:26 functional-090986 containerd[5266]: time="2025-12-06T08:53:26.541434097Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:27 functional-090986 containerd[5266]: time="2025-12-06T08:53:27.338817563Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 06 08:53:27 functional-090986 containerd[5266]: time="2025-12-06T08:53:27.341282237Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 06 08:53:27 functional-090986 containerd[5266]: time="2025-12-06T08:53:27.343553435Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 06 08:53:27 functional-090986 containerd[5266]: time="2025-12-06T08:53:27.356083604Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.286132725Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.288633436Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.291637896Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.298280775Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.471801260Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.473941105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.482138732Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.482655683Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.600988204Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.603184650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.614679868Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.615305433Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
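
(The ImageCreate/ImageDelete events above correspond to the cache add/delete/reload commands listed in the Audit table further down. Assuming the profile container is still up, containerd's resulting image store can be checked with the same crictl call the test itself runs over ssh:)

	# List CRI-visible images inside the node, mirroring the "ssh sudo crictl images" audit entry.
	out/minikube-linux-arm64 -p functional-090986 ssh -- sudo crictl images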
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:53:30.414695    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:30.415094    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:30.416630    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:30.416978    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:30.418495    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
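
(Both kubectl failures above reduce to nothing listening on the apiserver port. Assuming ss and curl are available inside the kicbase node, a quick in-node sanity check would be:)

	# Is anything bound to the apiserver port?
	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	# Probe the health endpoint directly; refused while the apiserver is down.
	curl -k https://localhost:8441/healthz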
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> kernel <==
	 08:53:30 up 36 min,  0 user,  load average: 0.39, 0.29, 0.54
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 08:53:27 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:27 functional-090986 kubelet[8995]: E1206 08:53:27.535505    8995 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:27 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:27 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:28 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 06 08:53:28 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:28 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:28 functional-090986 kubelet[9059]: E1206 08:53:28.304869    9059 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:28 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:28 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:28 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 06 08:53:28 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:28 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:28 functional-090986 kubelet[9140]: E1206 08:53:28.985298    9140 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:28 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:28 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:29 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 06 08:53:29 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:29 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:29 functional-090986 kubelet[9161]: E1206 08:53:29.782727    9161 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:29 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:29 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:30 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 06 08:53:30 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:30 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
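(The kubelet section above points at the likely root cause: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, so the apiserver never comes up and every poll earlier in the log is refused. A standard way to confirm the host's cgroup version, not part of the captured output, is:)

	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1.
	stat -fc %T /sys/fs/cgroup/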
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (388.96893ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-090986 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-090986 get pods: exit status 1 (107.472473ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-090986 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
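(The inspect output shows the node's 8441/tcp is also published on the host at 127.0.0.1:32791. Rather than scanning the full JSON, the mapping can be extracted with docker inspect's Go-template formatting:)

	# Print the host port that container port 8441/tcp is published on (32791 in this run).
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-090986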
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (299.353974ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-090986 logs -n 25: (1.008881622s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-181746 image build -t localhost/my-image:functional-181746 testdata/build --alsologtostderr                                                  │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format json --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format table --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls                                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ delete         │ -p functional-181746                                                                                                                                    │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ start          │ -p functional-090986 --alsologtostderr -v=8                                                                                                             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:47 UTC │                     │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:latest                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add minikube-local-cache-test:functional-090986                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache delete minikube-local-cache-test:functional-090986                                                                              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl images                                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	│ cache          │ functional-090986 cache reload                                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ kubectl        │ functional-090986 kubectl -- --context functional-090986 get pods                                                                                       │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:47:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
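	(For reference, the first entry below decodes per the format above: severity I — one of I/W/E/F for Info/Warning/Error/Fatal — date 1206 (Dec 6), wall-clock time 08:47:11.094911, thread id 48683, source location out.go:360, then the message:
	    I1206 08:47:11.094911   48683 out.go:360] Setting OutFile to fd 1 ...
	)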
	I1206 08:47:11.094911   48683 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:47:11.095050   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095060   48683 out.go:374] Setting ErrFile to fd 2...
	I1206 08:47:11.095065   48683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:47:11.095329   48683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:47:11.095763   48683 out.go:368] Setting JSON to false
	I1206 08:47:11.096588   48683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1782,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:47:11.096668   48683 start.go:143] virtualization:  
	I1206 08:47:11.100026   48683 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:47:11.103775   48683 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:47:11.103977   48683 notify.go:221] Checking for updates...
	I1206 08:47:11.109719   48683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:47:11.112668   48683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:11.115549   48683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:47:11.118516   48683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:47:11.121495   48683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:47:11.124961   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:11.125074   48683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:47:11.149854   48683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:47:11.149988   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.212959   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.203697623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.213084   48683 docker.go:319] overlay module found
	I1206 08:47:11.216243   48683 out.go:179] * Using the docker driver based on existing profile
	I1206 08:47:11.219285   48683 start.go:309] selected driver: docker
	I1206 08:47:11.219311   48683 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.219451   48683 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:47:11.219560   48683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:47:11.284944   48683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 08:47:11.27604915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:47:11.285369   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:11.285438   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:47:11.285486   48683 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:11.289257   48683 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:47:11.292082   48683 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:47:11.295206   48683 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:47:11.298095   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:11.298152   48683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:47:11.298166   48683 cache.go:65] Caching tarball of preloaded images
	I1206 08:47:11.298170   48683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:47:11.298253   48683 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:47:11.298264   48683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:47:11.298374   48683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:47:11.317301   48683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:47:11.317323   48683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:47:11.317345   48683 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:47:11.317377   48683 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:47:11.317445   48683 start.go:364] duration metric: took 50.847µs to acquireMachinesLock for "functional-090986"
	I1206 08:47:11.317466   48683 start.go:96] Skipping create...Using existing machine configuration
	I1206 08:47:11.317471   48683 fix.go:54] fixHost starting: 
	I1206 08:47:11.317772   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:11.334567   48683 fix.go:112] recreateIfNeeded on functional-090986: state=Running err=<nil>
	W1206 08:47:11.334595   48683 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 08:47:11.337684   48683 out.go:252] * Updating the running docker "functional-090986" container ...
	I1206 08:47:11.337717   48683 machine.go:94] provisionDockerMachine start ...
	I1206 08:47:11.337795   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.354534   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.354869   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.354883   48683 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:47:11.507058   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.507088   48683 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:47:11.507161   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.525196   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.525520   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.525537   48683 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:47:11.684471   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:47:11.684556   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:11.702187   48683 main.go:143] libmachine: Using SSH client type: native
	I1206 08:47:11.702515   48683 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:47:11.702540   48683 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:47:11.859622   48683 main.go:143] libmachine: SSH cmd err, output: <nil>: 
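	(The shell snippet above is idempotent: it only rewrites the 127.0.1.1 entry when the hostname is not already present. After this run the guest's /etc/hosts would contain a line like:
	    127.0.1.1 functional-090986
	)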
	I1206 08:47:11.859650   48683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:47:11.859671   48683 ubuntu.go:190] setting up certificates
	I1206 08:47:11.859680   48683 provision.go:84] configureAuth start
	I1206 08:47:11.859747   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:11.877706   48683 provision.go:143] copyHostCerts
	I1206 08:47:11.877750   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877787   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:47:11.877800   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:47:11.877873   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:47:11.877976   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.877997   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:47:11.878007   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:47:11.878035   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:47:11.878088   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878108   48683 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:47:11.878114   48683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:47:11.878140   48683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:47:11.878192   48683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
	I1206 08:47:12.018564   48683 provision.go:177] copyRemoteCerts
	I1206 08:47:12.018632   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:47:12.018672   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.036577   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.143156   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 08:47:12.143226   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:47:12.160243   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 08:47:12.160303   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 08:47:12.177568   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 08:47:12.177628   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:47:12.194504   48683 provision.go:87] duration metric: took 334.802128ms to configureAuth
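	(configureAuth regenerates the docker-machine style TLS material and copies it to the remote paths shown above: /etc/docker/ca.pem, server.pem and server-key.pem. As an illustrative sanity check, not part of this run, the server certificate could be verified against the CA from inside the node:
	    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	)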
	I1206 08:47:12.194543   48683 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:47:12.194717   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:12.194725   48683 machine.go:97] duration metric: took 857.000255ms to provisionDockerMachine
	I1206 08:47:12.194732   48683 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:47:12.194743   48683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:47:12.194796   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:47:12.194842   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.212073   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.315270   48683 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:47:12.318678   48683 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 08:47:12.318701   48683 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 08:47:12.318706   48683 command_runner.go:130] > VERSION_ID="12"
	I1206 08:47:12.318711   48683 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 08:47:12.318717   48683 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 08:47:12.318720   48683 command_runner.go:130] > ID=debian
	I1206 08:47:12.318724   48683 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 08:47:12.318730   48683 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 08:47:12.318735   48683 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 08:47:12.318975   48683 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:47:12.319002   48683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:47:12.319013   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:47:12.319072   48683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:47:12.319161   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:47:12.319172   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /etc/ssl/certs/42922.pem
	I1206 08:47:12.319246   48683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:47:12.319253   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> /etc/test/nested/copy/4292/hosts
	I1206 08:47:12.319298   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:47:12.327031   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:12.344679   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:47:12.363077   48683 start.go:296] duration metric: took 168.329595ms for postStartSetup
	I1206 08:47:12.363152   48683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:47:12.363210   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.380353   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.487060   48683 command_runner.go:130] > 11%
	I1206 08:47:12.487699   48683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:47:12.493338   48683 command_runner.go:130] > 174G
	I1206 08:47:12.494716   48683 fix.go:56] duration metric: took 1.177238165s for fixHost
	I1206 08:47:12.494741   48683 start.go:83] releasing machines lock for "functional-090986", held for 1.177286419s
	I1206 08:47:12.494813   48683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:47:12.512960   48683 ssh_runner.go:195] Run: cat /version.json
	I1206 08:47:12.513022   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.513272   48683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:47:12.513331   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:12.541090   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.554766   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:12.647127   48683 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
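	(The /version.json probed above is baked into the kicbase image, so the same check can be reproduced by hand against the running node, e.g.:
	    docker exec functional-090986 cat /version.json
	)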
	I1206 08:47:12.647264   48683 ssh_runner.go:195] Run: systemctl --version
	I1206 08:47:12.750867   48683 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 08:47:12.751021   48683 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 08:47:12.751059   48683 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 08:47:12.751151   48683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 08:47:12.755609   48683 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 08:47:12.756103   48683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:47:12.756176   48683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:47:12.764393   48683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 08:47:12.764420   48683 start.go:496] detecting cgroup driver to use...
	I1206 08:47:12.764452   48683 detect.go:187] detected "cgroupfs" cgroup driver on host os
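	(The "cgroupfs" result comes from the host's Docker daemon, visible as CgroupDriver:cgroupfs in the docker info dump above; the same value can be read directly with:
	    docker info --format '{{.CgroupDriver}}'
	)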
	I1206 08:47:12.764507   48683 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:47:12.779951   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:47:12.793243   48683 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:47:12.793324   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:47:12.809005   48683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:47:12.823043   48683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:47:12.939696   48683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:47:13.060632   48683 docker.go:234] disabling docker service ...
	I1206 08:47:13.060721   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:47:13.078332   48683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:47:13.093719   48683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:47:13.229319   48683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:47:13.368814   48683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:47:13.381432   48683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:47:13.395011   48683 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1206 08:47:13.396419   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:47:13.405770   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:47:13.415310   48683 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:47:13.415505   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:47:13.424963   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.433399   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:47:13.442072   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:47:13.450816   48683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:47:13.458824   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:47:13.467776   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:47:13.477145   48683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 08:47:13.486457   48683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:47:13.493910   48683 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 08:47:13.494986   48683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:47:13.503356   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:13.622996   48683 ssh_runner.go:195] Run: sudo systemctl restart containerd
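	(Taken together, the sed edits above pin the sandbox (pause) image, switch SystemdCgroup off in favour of cgroupfs, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports before containerd is restarted. A quick way to confirm the rewritten keys on the node; illustrative, assuming the stock config layout:
	    sudo grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
	    # expected after the edits:
	    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
	    #   SystemdCgroup = false
	    #   enable_unprivileged_ports = true
	)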
	I1206 08:47:13.753042   48683 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:47:13.753133   48683 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:47:13.757647   48683 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 08:47:13.757672   48683 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 08:47:13.757681   48683 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1206 08:47:13.757689   48683 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:13.757724   48683 command_runner.go:130] > Access: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757736   48683 command_runner.go:130] > Modify: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757742   48683 command_runner.go:130] > Change: 2025-12-06 08:47:13.700132218 +0000
	I1206 08:47:13.757746   48683 command_runner.go:130] >  Birth: -
	I1206 08:47:13.757803   48683 start.go:564] Will wait 60s for crictl version
	I1206 08:47:13.757883   48683 ssh_runner.go:195] Run: which crictl
	I1206 08:47:13.761846   48683 command_runner.go:130] > /usr/local/bin/crictl
	I1206 08:47:13.761974   48683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:47:13.786269   48683 command_runner.go:130] > Version:  0.1.0
	I1206 08:47:13.786289   48683 command_runner.go:130] > RuntimeName:  containerd
	I1206 08:47:13.786295   48683 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 08:47:13.786302   48683 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 08:47:13.788604   48683 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 08:47:13.788708   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.809864   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 08:47:13.811926   48683 ssh_runner.go:195] Run: containerd --version
	I1206 08:47:13.831700   48683 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
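	(crictl resolves its endpoint from the /etc/crictl.yaml written earlier; the socket can also be passed explicitly, which is useful when that file is absent:
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	)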
	I1206 08:47:13.839817   48683 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 08:47:13.842721   48683 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
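	(The heavily escaped --format argument above is an ordinary Go template; a simpler variant extracting just the cluster network's subnet would be:
	    docker network inspect functional-090986 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	)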
	I1206 08:47:13.858999   48683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:47:13.862710   48683 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 08:47:13.862939   48683 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:47:13.863057   48683 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:47:13.863132   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.889556   48683 command_runner.go:130] > {
	I1206 08:47:13.889580   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.889586   48683 command_runner.go:130] >     {
	I1206 08:47:13.889601   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.889607   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889612   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.889616   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889619   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889628   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.889635   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889640   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.889652   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889657   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889661   48683 command_runner.go:130] >     },
	I1206 08:47:13.889664   48683 command_runner.go:130] >     {
	I1206 08:47:13.889672   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.889676   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889681   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.889687   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889691   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889707   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.889710   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889715   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.889725   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889729   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889733   48683 command_runner.go:130] >     },
	I1206 08:47:13.889736   48683 command_runner.go:130] >     {
	I1206 08:47:13.889743   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.889752   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889767   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.889770   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889777   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889785   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.889792   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889796   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.889800   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.889808   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889815   48683 command_runner.go:130] >     },
	I1206 08:47:13.889818   48683 command_runner.go:130] >     {
	I1206 08:47:13.889825   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.889829   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889837   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.889841   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889844   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889852   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.889863   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889867   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.889871   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889875   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889885   48683 command_runner.go:130] >       },
	I1206 08:47:13.889889   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889892   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889896   48683 command_runner.go:130] >     },
	I1206 08:47:13.889899   48683 command_runner.go:130] >     {
	I1206 08:47:13.889906   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.889912   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.889918   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.889920   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889925   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.889933   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.889937   48683 command_runner.go:130] >       ],
	I1206 08:47:13.889945   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.889949   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.889960   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.889964   48683 command_runner.go:130] >       },
	I1206 08:47:13.889970   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.889975   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.889987   48683 command_runner.go:130] >     },
	I1206 08:47:13.890022   48683 command_runner.go:130] >     {
	I1206 08:47:13.890033   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.890037   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890043   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.890049   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890054   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890064   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.890070   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890075   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.890078   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890082   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890087   48683 command_runner.go:130] >       },
	I1206 08:47:13.890092   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890098   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890102   48683 command_runner.go:130] >     },
	I1206 08:47:13.890105   48683 command_runner.go:130] >     {
	I1206 08:47:13.890112   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.890115   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890121   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.890124   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890139   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.890145   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890149   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.890153   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890156   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890159   48683 command_runner.go:130] >     },
	I1206 08:47:13.890170   48683 command_runner.go:130] >     {
	I1206 08:47:13.890177   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.890181   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890187   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.890190   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890206   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.890215   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890223   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.890228   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890231   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.890235   48683 command_runner.go:130] >       },
	I1206 08:47:13.890239   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890250   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.890254   48683 command_runner.go:130] >     },
	I1206 08:47:13.890257   48683 command_runner.go:130] >     {
	I1206 08:47:13.890264   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.890272   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.890277   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.890280   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890284   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.890291   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.890294   48683 command_runner.go:130] >       ],
	I1206 08:47:13.890298   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.890305   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.890310   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.890315   48683 command_runner.go:130] >       },
	I1206 08:47:13.890319   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.890331   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.890335   48683 command_runner.go:130] >     }
	I1206 08:47:13.890337   48683 command_runner.go:130] >   ]
	I1206 08:47:13.890340   48683 command_runner.go:130] > }
	I1206 08:47:13.892630   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.892653   48683 containerd.go:534] Images already preloaded, skipping extraction
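	(The preload check compares the repoTags in the JSON above against the image list expected for Kubernetes v1.35.0-beta.0. To pull those tags out of the raw output by hand, assuming jq is available on the host:
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'
	)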
	I1206 08:47:13.892734   48683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:47:13.915064   48683 command_runner.go:130] > {
	I1206 08:47:13.915085   48683 command_runner.go:130] >   "images":  [
	I1206 08:47:13.915091   48683 command_runner.go:130] >     {
	I1206 08:47:13.915102   48683 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 08:47:13.915109   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915115   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 08:47:13.915119   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915128   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915142   48683 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 08:47:13.915149   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915153   48683 command_runner.go:130] >       "size":  "40636774",
	I1206 08:47:13.915157   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915161   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915164   48683 command_runner.go:130] >     },
	I1206 08:47:13.915167   48683 command_runner.go:130] >     {
	I1206 08:47:13.915178   48683 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 08:47:13.915184   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915189   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 08:47:13.915193   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915197   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915208   48683 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 08:47:13.915214   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915218   48683 command_runner.go:130] >       "size":  "8034419",
	I1206 08:47:13.915222   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915225   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915228   48683 command_runner.go:130] >     },
	I1206 08:47:13.915231   48683 command_runner.go:130] >     {
	I1206 08:47:13.915238   48683 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 08:47:13.915245   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915251   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 08:47:13.915254   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915262   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915270   48683 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 08:47:13.915275   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915279   48683 command_runner.go:130] >       "size":  "21168808",
	I1206 08:47:13.915286   48683 command_runner.go:130] >       "username":  "nonroot",
	I1206 08:47:13.915291   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915295   48683 command_runner.go:130] >     },
	I1206 08:47:13.915298   48683 command_runner.go:130] >     {
	I1206 08:47:13.915305   48683 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 08:47:13.915311   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915320   48683 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 08:47:13.915324   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915328   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915338   48683 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 08:47:13.915341   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915345   48683 command_runner.go:130] >       "size":  "21136588",
	I1206 08:47:13.915349   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915352   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915359   48683 command_runner.go:130] >       },
	I1206 08:47:13.915363   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915410   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915414   48683 command_runner.go:130] >     },
	I1206 08:47:13.915418   48683 command_runner.go:130] >     {
	I1206 08:47:13.915424   48683 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 08:47:13.915428   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915434   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 08:47:13.915437   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915441   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915448   48683 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 08:47:13.915451   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915455   48683 command_runner.go:130] >       "size":  "24678359",
	I1206 08:47:13.915458   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915471   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915474   48683 command_runner.go:130] >       },
	I1206 08:47:13.915478   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915481   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915484   48683 command_runner.go:130] >     },
	I1206 08:47:13.915487   48683 command_runner.go:130] >     {
	I1206 08:47:13.915494   48683 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 08:47:13.915497   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915503   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 08:47:13.915506   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915509   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915523   48683 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 08:47:13.915526   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915530   48683 command_runner.go:130] >       "size":  "20661043",
	I1206 08:47:13.915534   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915540   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915543   48683 command_runner.go:130] >       },
	I1206 08:47:13.915547   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915550   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915553   48683 command_runner.go:130] >     },
	I1206 08:47:13.915556   48683 command_runner.go:130] >     {
	I1206 08:47:13.915563   48683 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 08:47:13.915580   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915585   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 08:47:13.915588   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915592   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915601   48683 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 08:47:13.915608   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915612   48683 command_runner.go:130] >       "size":  "22429671",
	I1206 08:47:13.915616   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915620   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915622   48683 command_runner.go:130] >     },
	I1206 08:47:13.915626   48683 command_runner.go:130] >     {
	I1206 08:47:13.915635   48683 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 08:47:13.915649   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915655   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 08:47:13.915658   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915662   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915670   48683 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 08:47:13.915676   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915680   48683 command_runner.go:130] >       "size":  "15391364",
	I1206 08:47:13.915684   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915687   48683 command_runner.go:130] >         "value":  "0"
	I1206 08:47:13.915691   48683 command_runner.go:130] >       },
	I1206 08:47:13.915699   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915706   48683 command_runner.go:130] >       "pinned":  false
	I1206 08:47:13.915710   48683 command_runner.go:130] >     },
	I1206 08:47:13.915713   48683 command_runner.go:130] >     {
	I1206 08:47:13.915720   48683 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 08:47:13.915723   48683 command_runner.go:130] >       "repoTags":  [
	I1206 08:47:13.915728   48683 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 08:47:13.915731   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915735   48683 command_runner.go:130] >       "repoDigests":  [
	I1206 08:47:13.915746   48683 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 08:47:13.915752   48683 command_runner.go:130] >       ],
	I1206 08:47:13.915756   48683 command_runner.go:130] >       "size":  "267939",
	I1206 08:47:13.915760   48683 command_runner.go:130] >       "uid":  {
	I1206 08:47:13.915764   48683 command_runner.go:130] >         "value":  "65535"
	I1206 08:47:13.915777   48683 command_runner.go:130] >       },
	I1206 08:47:13.915781   48683 command_runner.go:130] >       "username":  "",
	I1206 08:47:13.915785   48683 command_runner.go:130] >       "pinned":  true
	I1206 08:47:13.915790   48683 command_runner.go:130] >     }
	I1206 08:47:13.915793   48683 command_runner.go:130] >   ]
	I1206 08:47:13.915796   48683 command_runner.go:130] > }
	I1206 08:47:13.917976   48683 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:47:13.917998   48683 cache_images.go:86] Images are preloaded, skipping loading
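[editor's note] The JSON dump above is the CRI image listing minikube parses before concluding the preload already contains every required image. A minimal sketch for reproducing it by hand (crictl inside the node is shown by this run; the jq filter on the host is an assumption):

    # List images over the CRI socket, matching the output logged above:
    minikube ssh -p functional-090986 -- sudo crictl images -o json
    # Reduce to just the repo tags (jq assumed available on the host):
    minikube ssh -p functional-090986 -- sudo crictl images -o json | jq -r '.images[].repoTags[]'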
	I1206 08:47:13.918006   48683 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:47:13.918108   48683 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
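[editor's note] The unit fragment above is the kubelet systemd drop-in minikube renders; the scp line further down writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes). A sketch for inspecting the effective unit on the node (these inspection commands are assumptions, not taken from this run):

    # Show kubelet.service together with all drop-ins that override it:
    minikube ssh -p functional-090986 -- systemctl cat kubelet
    # Or read the rendered drop-in directly:
    minikube ssh -p functional-090986 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf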
	I1206 08:47:13.918181   48683 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:47:13.946472   48683 command_runner.go:130] > {
	I1206 08:47:13.946489   48683 command_runner.go:130] >   "cniconfig": {
	I1206 08:47:13.946494   48683 command_runner.go:130] >     "Networks": [
	I1206 08:47:13.946497   48683 command_runner.go:130] >       {
	I1206 08:47:13.946502   48683 command_runner.go:130] >         "Config": {
	I1206 08:47:13.946507   48683 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 08:47:13.946512   48683 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 08:47:13.946516   48683 command_runner.go:130] >           "Plugins": [
	I1206 08:47:13.946520   48683 command_runner.go:130] >             {
	I1206 08:47:13.946524   48683 command_runner.go:130] >               "Network": {
	I1206 08:47:13.946529   48683 command_runner.go:130] >                 "ipam": {},
	I1206 08:47:13.946537   48683 command_runner.go:130] >                 "type": "loopback"
	I1206 08:47:13.946541   48683 command_runner.go:130] >               },
	I1206 08:47:13.946554   48683 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 08:47:13.946558   48683 command_runner.go:130] >             }
	I1206 08:47:13.946561   48683 command_runner.go:130] >           ],
	I1206 08:47:13.946573   48683 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 08:47:13.946581   48683 command_runner.go:130] >         },
	I1206 08:47:13.946586   48683 command_runner.go:130] >         "IFName": "lo"
	I1206 08:47:13.946590   48683 command_runner.go:130] >       }
	I1206 08:47:13.946593   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946597   48683 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 08:47:13.946601   48683 command_runner.go:130] >     "PluginDirs": [
	I1206 08:47:13.946605   48683 command_runner.go:130] >       "/opt/cni/bin"
	I1206 08:47:13.946609   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946613   48683 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 08:47:13.946617   48683 command_runner.go:130] >     "Prefix": "eth"
	I1206 08:47:13.946620   48683 command_runner.go:130] >   },
	I1206 08:47:13.946623   48683 command_runner.go:130] >   "config": {
	I1206 08:47:13.946627   48683 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 08:47:13.946630   48683 command_runner.go:130] >       "/etc/cdi",
	I1206 08:47:13.946636   48683 command_runner.go:130] >       "/var/run/cdi"
	I1206 08:47:13.946640   48683 command_runner.go:130] >     ],
	I1206 08:47:13.946643   48683 command_runner.go:130] >     "cni": {
	I1206 08:47:13.946646   48683 command_runner.go:130] >       "binDir": "",
	I1206 08:47:13.946650   48683 command_runner.go:130] >       "binDirs": [
	I1206 08:47:13.946653   48683 command_runner.go:130] >         "/opt/cni/bin"
	I1206 08:47:13.946656   48683 command_runner.go:130] >       ],
	I1206 08:47:13.946661   48683 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 08:47:13.946665   48683 command_runner.go:130] >       "confTemplate": "",
	I1206 08:47:13.946668   48683 command_runner.go:130] >       "ipPref": "",
	I1206 08:47:13.946672   48683 command_runner.go:130] >       "maxConfNum": 1,
	I1206 08:47:13.946676   48683 command_runner.go:130] >       "setupSerially": false,
	I1206 08:47:13.946680   48683 command_runner.go:130] >       "useInternalLoopback": false
	I1206 08:47:13.946683   48683 command_runner.go:130] >     },
	I1206 08:47:13.946688   48683 command_runner.go:130] >     "containerd": {
	I1206 08:47:13.946696   48683 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 08:47:13.946701   48683 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 08:47:13.946706   48683 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 08:47:13.946710   48683 command_runner.go:130] >       "runtimes": {
	I1206 08:47:13.946713   48683 command_runner.go:130] >         "runc": {
	I1206 08:47:13.946718   48683 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 08:47:13.946722   48683 command_runner.go:130] >           "PodAnnotations": null,
	I1206 08:47:13.946728   48683 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 08:47:13.946733   48683 command_runner.go:130] >           "cgroupWritable": false,
	I1206 08:47:13.946738   48683 command_runner.go:130] >           "cniConfDir": "",
	I1206 08:47:13.946742   48683 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 08:47:13.946745   48683 command_runner.go:130] >           "io_type": "",
	I1206 08:47:13.946748   48683 command_runner.go:130] >           "options": {
	I1206 08:47:13.946752   48683 command_runner.go:130] >             "BinaryName": "",
	I1206 08:47:13.946756   48683 command_runner.go:130] >             "CriuImagePath": "",
	I1206 08:47:13.946761   48683 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 08:47:13.946764   48683 command_runner.go:130] >             "IoGid": 0,
	I1206 08:47:13.946768   48683 command_runner.go:130] >             "IoUid": 0,
	I1206 08:47:13.946772   48683 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 08:47:13.946776   48683 command_runner.go:130] >             "Root": "",
	I1206 08:47:13.946780   48683 command_runner.go:130] >             "ShimCgroup": "",
	I1206 08:47:13.946784   48683 command_runner.go:130] >             "SystemdCgroup": false
	I1206 08:47:13.946787   48683 command_runner.go:130] >           },
	I1206 08:47:13.946793   48683 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 08:47:13.946799   48683 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 08:47:13.946803   48683 command_runner.go:130] >           "runtimePath": "",
	I1206 08:47:13.946808   48683 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 08:47:13.946812   48683 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 08:47:13.946816   48683 command_runner.go:130] >           "snapshotter": ""
	I1206 08:47:13.946820   48683 command_runner.go:130] >         }
	I1206 08:47:13.946823   48683 command_runner.go:130] >       }
	I1206 08:47:13.946826   48683 command_runner.go:130] >     },
	I1206 08:47:13.946836   48683 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 08:47:13.946848   48683 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 08:47:13.946854   48683 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 08:47:13.946858   48683 command_runner.go:130] >     "disableApparmor": false,
	I1206 08:47:13.946863   48683 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 08:47:13.946867   48683 command_runner.go:130] >     "disableProcMount": false,
	I1206 08:47:13.946871   48683 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 08:47:13.946874   48683 command_runner.go:130] >     "enableCDI": true,
	I1206 08:47:13.946878   48683 command_runner.go:130] >     "enableSelinux": false,
	I1206 08:47:13.946883   48683 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 08:47:13.946887   48683 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 08:47:13.946891   48683 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 08:47:13.946896   48683 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 08:47:13.946900   48683 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 08:47:13.946905   48683 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 08:47:13.946909   48683 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 08:47:13.946917   48683 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946922   48683 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 08:47:13.946928   48683 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 08:47:13.946932   48683 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 08:47:13.946937   48683 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 08:47:13.946940   48683 command_runner.go:130] >   },
	I1206 08:47:13.946943   48683 command_runner.go:130] >   "features": {
	I1206 08:47:13.946948   48683 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 08:47:13.946951   48683 command_runner.go:130] >   },
	I1206 08:47:13.946955   48683 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 08:47:13.946964   48683 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946974   48683 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 08:47:13.946977   48683 command_runner.go:130] >   "runtimeHandlers": [
	I1206 08:47:13.946980   48683 command_runner.go:130] >     {
	I1206 08:47:13.946984   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.946988   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.946992   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.946996   48683 command_runner.go:130] >       }
	I1206 08:47:13.947002   48683 command_runner.go:130] >     },
	I1206 08:47:13.947006   48683 command_runner.go:130] >     {
	I1206 08:47:13.947009   48683 command_runner.go:130] >       "features": {
	I1206 08:47:13.947015   48683 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 08:47:13.947019   48683 command_runner.go:130] >         "user_namespaces": true
	I1206 08:47:13.947022   48683 command_runner.go:130] >       },
	I1206 08:47:13.947026   48683 command_runner.go:130] >       "name": "runc"
	I1206 08:47:13.947029   48683 command_runner.go:130] >     }
	I1206 08:47:13.947032   48683 command_runner.go:130] >   ],
	I1206 08:47:13.947035   48683 command_runner.go:130] >   "status": {
	I1206 08:47:13.947039   48683 command_runner.go:130] >     "conditions": [
	I1206 08:47:13.947042   48683 command_runner.go:130] >       {
	I1206 08:47:13.947046   48683 command_runner.go:130] >         "message": "",
	I1206 08:47:13.947050   48683 command_runner.go:130] >         "reason": "",
	I1206 08:47:13.947053   48683 command_runner.go:130] >         "status": true,
	I1206 08:47:13.947059   48683 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 08:47:13.947062   48683 command_runner.go:130] >       },
	I1206 08:47:13.947065   48683 command_runner.go:130] >       {
	I1206 08:47:13.947072   48683 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 08:47:13.947081   48683 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 08:47:13.947085   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947089   48683 command_runner.go:130] >         "type": "NetworkReady"
	I1206 08:47:13.947091   48683 command_runner.go:130] >       },
	I1206 08:47:13.947094   48683 command_runner.go:130] >       {
	I1206 08:47:13.947118   48683 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 08:47:13.947123   48683 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 08:47:13.947129   48683 command_runner.go:130] >         "status": false,
	I1206 08:47:13.947134   48683 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 08:47:13.947137   48683 command_runner.go:130] >       }
	I1206 08:47:13.947139   48683 command_runner.go:130] >     ]
	I1206 08:47:13.947142   48683 command_runner.go:130] >   }
	I1206 08:47:13.947144   48683 command_runner.go:130] > }
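[editor's note] This block is the output of the `sudo crictl info` run above. The NetworkReady=false condition is expected at this point: no CNI config exists yet in /etc/cni/net.d, and the very next lines show minikube selecting kindnet to provide one. A sketch for checking just the readiness conditions (the jq filter is an assumption):

    # Pull out the runtime/network readiness conditions from crictl info:
    minikube ssh -p functional-090986 -- sudo crictl info | jq '.status.conditions'
    # NetworkReady flips to true once a CNI config lands in /etc/cni/net.d.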
	I1206 08:47:13.947502   48683 cni.go:84] Creating CNI manager for ""
	I1206 08:47:13.947519   48683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:47:13.947541   48683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:47:13.947564   48683 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:47:13.947673   48683 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 08:47:13.947742   48683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:47:13.955523   48683 command_runner.go:130] > kubeadm
	I1206 08:47:13.955542   48683 command_runner.go:130] > kubectl
	I1206 08:47:13.955546   48683 command_runner.go:130] > kubelet
	I1206 08:47:13.955560   48683 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:47:13.955622   48683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:47:13.963242   48683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:47:13.976514   48683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:47:13.994365   48683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
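[editor's note] The YAML above is the kubeadm config that the scp line just staged as /var/tmp/minikube/kubeadm.yaml.new. Minikube later diffs it against the live copy to decide whether the control plane needs reconfiguring (the `sudo diff -u` run appears further down in this log); a sketch of the same check by hand:

    # Compare the staged kubeadm config with the one currently in use:
    minikube ssh -p functional-090986 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new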
	I1206 08:47:14.008131   48683 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:47:14.012074   48683 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 08:47:14.012170   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:14.162349   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:14.970935   48683 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:47:14.971004   48683 certs.go:195] generating shared ca certs ...
	I1206 08:47:14.971035   48683 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:14.971212   48683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:47:14.971308   48683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:47:14.971340   48683 certs.go:257] generating profile certs ...
	I1206 08:47:14.971529   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:47:14.971755   48683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:47:14.971844   48683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:47:14.971869   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 08:47:14.971914   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 08:47:14.971945   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 08:47:14.971989   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 08:47:14.972021   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 08:47:14.972053   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 08:47:14.972085   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 08:47:14.972115   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 08:47:14.972198   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:47:14.972259   48683 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:47:14.972284   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:47:14.972342   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:47:14.972394   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:47:14.972452   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:47:14.972528   48683 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:47:14.972579   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:14.972619   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem -> /usr/share/ca-certificates/4292.pem
	I1206 08:47:14.972659   48683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> /usr/share/ca-certificates/42922.pem
	I1206 08:47:14.973224   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:47:14.995297   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:47:15.042161   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:47:15.062885   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:47:15.082018   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:47:15.101436   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:47:15.120061   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:47:15.140257   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:47:15.160107   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:47:15.178980   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:47:15.197893   48683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:47:15.216224   48683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:47:15.229330   48683 ssh_runner.go:195] Run: openssl version
	I1206 08:47:15.235331   48683 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 08:47:15.235817   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.243429   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:47:15.250764   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254643   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254673   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.254723   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:47:15.295906   48683 command_runner.go:130] > b5213941
	I1206 08:47:15.295990   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:47:15.303441   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.310784   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:47:15.318504   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322051   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322380   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.322461   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:47:15.363237   48683 command_runner.go:130] > 51391683
	I1206 08:47:15.363703   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:47:15.371299   48683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.378918   48683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:47:15.386367   48683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390281   48683 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390354   48683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.390410   48683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:47:15.431004   48683 command_runner.go:130] > 3ec20f2e
	I1206 08:47:15.431441   48683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 08:47:15.439072   48683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442819   48683 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:47:15.442856   48683 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 08:47:15.442863   48683 command_runner.go:130] > Device: 259,1	Inode: 1055659     Links: 1
	I1206 08:47:15.442870   48683 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 08:47:15.442877   48683 command_runner.go:130] > Access: 2025-12-06 08:43:07.824678266 +0000
	I1206 08:47:15.442882   48683 command_runner.go:130] > Modify: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442890   48683 command_runner.go:130] > Change: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442895   48683 command_runner.go:130] >  Birth: 2025-12-06 08:39:03.665220506 +0000
	I1206 08:47:15.442956   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 08:47:15.483144   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.483601   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 08:47:15.524376   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.524527   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 08:47:15.567333   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.567897   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 08:47:15.609722   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.610195   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 08:47:15.652939   48683 command_runner.go:130] > Certificate will not expire
	I1206 08:47:15.653458   48683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 08:47:15.694815   48683 command_runner.go:130] > Certificate will not expire
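[editor's note] Each check above uses `openssl x509 -checkend 86400`, which exits 0 (printing "Certificate will not expire") if the certificate will still be valid 86400 seconds, i.e. 24 hours, from now. A sketch of using the exit status directly:

    # Exit status carries the answer: 0 = still valid in 24h, 1 = expiring.
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"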
	I1206 08:47:15.695278   48683 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:47:15.695370   48683 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:47:15.695465   48683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:47:15.724990   48683 cri.go:89] found id: ""
	I1206 08:47:15.725064   48683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:47:15.732181   48683 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 08:47:15.732210   48683 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 08:47:15.732217   48683 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 08:47:15.733102   48683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 08:47:15.733116   48683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 08:47:15.733169   48683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 08:47:15.740768   48683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:47:15.741168   48683 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-090986" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.741273   48683 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "functional-090986" cluster setting kubeconfig missing "functional-090986" context setting]
	I1206 08:47:15.741558   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.741975   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.742128   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.742650   48683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 08:47:15.742669   48683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 08:47:15.742675   48683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 08:47:15.742680   48683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 08:47:15.742685   48683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 08:47:15.742976   48683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 08:47:15.743070   48683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 08:47:15.750828   48683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 08:47:15.750861   48683 kubeadm.go:602] duration metric: took 17.739612ms to restartPrimaryControlPlane
	I1206 08:47:15.750871   48683 kubeadm.go:403] duration metric: took 55.600148ms to StartCluster
	I1206 08:47:15.750890   48683 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.750966   48683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.751639   48683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:47:15.751842   48683 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 08:47:15.752180   48683 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:47:15.752232   48683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 08:47:15.752302   48683 addons.go:70] Setting storage-provisioner=true in profile "functional-090986"
	I1206 08:47:15.752319   48683 addons.go:239] Setting addon storage-provisioner=true in "functional-090986"
	I1206 08:47:15.752322   48683 addons.go:70] Setting default-storageclass=true in profile "functional-090986"
	I1206 08:47:15.752340   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.752341   48683 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-090986"
	I1206 08:47:15.752637   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.752784   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.759188   48683 out.go:179] * Verifying Kubernetes components...
	I1206 08:47:15.762058   48683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:47:15.783651   48683 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:47:15.783826   48683 kapi.go:59] client config for functional-090986: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 08:47:15.785192   48683 addons.go:239] Setting addon default-storageclass=true in "functional-090986"
	I1206 08:47:15.785238   48683 host.go:66] Checking if "functional-090986" exists ...
	I1206 08:47:15.785700   48683 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:47:15.797451   48683 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 08:47:15.800625   48683 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:15.800648   48683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 08:47:15.800725   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.810048   48683 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:15.810080   48683 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 08:47:15.810147   48683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:47:15.824818   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:47:15.853374   48683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
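[editor's note] The two ssh clients above connect to 127.0.0.1:32788: with the docker driver, the node's port 22 is published on a random host port, which the `docker container inspect -f ...HostPort...` commands above extract. A sketch of the same lookup (the short `docker port` form is an assumption; the template form is taken verbatim from the log):

    # Either form resolves the host port mapped to the node's SSH:
    docker port functional-090986 22
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-090986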
	I1206 08:47:15.963935   48683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:47:15.994167   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.016409   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:16.722308   48683 node_ready.go:35] waiting up to 6m0s for node "functional-090986" to be "Ready" ...
	I1206 08:47:16.722441   48683 type.go:168] "Request Body" body=""
	I1206 08:47:16.722509   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:16.722791   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.722902   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:16.722979   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722997   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723021   48683 retry.go:31] will retry after 246.599259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.722932   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.723088   48683 retry.go:31] will retry after 155.728524ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.879530   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:16.938491   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:16.942697   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.942739   48683 retry.go:31] will retry after 198.095926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:16.969843   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.032387   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.037081   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.037167   48683 retry.go:31] will retry after 340.655262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.141488   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:17.200483   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.200581   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.200607   48683 retry.go:31] will retry after 823.921965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.222635   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.222706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.222990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
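	[editor's note] The round_trippers lines come from an HTTP transport wrapper that logs each request and its response; status="" with milliseconds=0 means the dial failed before any response arrived. A hedged sketch of such a wrapper in plain net/http (loggingTransport is an illustrative name, not Kubernetes' actual round-tripper):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // loggingTransport wraps another RoundTripper and logs each request's
    // verb, URL, status, and elapsed milliseconds.
    type loggingTransport struct {
    	next http.RoundTripper
    }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	start := time.Now()
    	resp, err := t.next.RoundTrip(req)
    	status := "" // stays empty when the connection is refused, as above
    	if resp != nil {
    		status = resp.Status
    	}
    	fmt.Printf("\"Response\" verb=%q url=%q status=%q milliseconds=%d\n",
    		req.Method, req.URL.String(), status,
    		time.Since(start).Milliseconds())
    	return resp, err
    }

    func main() {
    	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
    	// Against an unreachable endpoint this logs status="" and returns an
    	// error, matching the empty status/headers fields in the log.
    	_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-090986")
    }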
	I1206 08:47:17.378343   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:17.437909   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:17.437949   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.437997   48683 retry.go:31] will retry after 597.373907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:17.723431   48683 type.go:168] "Request Body" body=""
	I1206 08:47:17.723506   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:17.723862   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.025532   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:18.036222   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:18.102548   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.106195   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.106289   48683 retry.go:31] will retry after 988.595122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128444   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:18.128537   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.128579   48683 retry.go:31] will retry after 1.22957213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:18.222734   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.222810   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.223190   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:18.722737   48683 type.go:168] "Request Body" body=""
	I1206 08:47:18.722827   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:18.723191   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:18.723277   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
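	[editor's note] node_ready.go is repeatedly asking whether the node's Ready condition is True, and every attempt fails because the apiserver at 192.168.49.2:8441 refuses connections. A minimal client-go sketch of that check, assuming a standard kubeconfig; nodeReady is an illustrative helper, not minikube's code.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches a node and reports whether its Ready condition is True.
    func nodeReady(clientset kubernetes.Interface, name string) (bool, error) {
    	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "connect: connection refused" while the apiserver is down
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := nodeReady(clientset, "functional-090986")
    	fmt.Println(ready, err)
    }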
	I1206 08:47:19.095767   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:19.151460   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.155168   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.155201   48683 retry.go:31] will retry after 1.717558752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.223503   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.223595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.223937   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:19.358372   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:19.411770   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:19.415269   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.415303   48683 retry.go:31] will retry after 781.287082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:19.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:47:19.722648   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:19.722942   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.197734   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:20.223123   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.223196   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.223547   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:20.262283   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.262363   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.262407   48683 retry.go:31] will retry after 1.829414459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.722870   48683 type.go:168] "Request Body" body=""
	I1206 08:47:20.722941   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:20.723284   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:20.723338   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:20.873661   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:20.932799   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:20.936985   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:20.937020   48683 retry.go:31] will retry after 2.554499586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:21.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.223553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.223934   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:21.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:47:21.722674   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:21.723048   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.092657   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:22.149785   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:22.153326   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.153368   48683 retry.go:31] will retry after 2.084938041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:22.222743   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.223181   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:22.722901   48683 type.go:168] "Request Body" body=""
	I1206 08:47:22.722987   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:22.723330   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:22.723402   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:23.223196   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.223285   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.223660   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:23.492173   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:23.557652   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:23.557715   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.557741   48683 retry.go:31] will retry after 4.19827742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:23.723091   48683 type.go:168] "Request Body" body=""
	I1206 08:47:23.723166   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:23.723482   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.223263   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.223339   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.223623   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:24.238906   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:24.307275   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:24.307320   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.307339   48683 retry.go:31] will retry after 4.494270685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:24.722793   48683 type.go:168] "Request Body" body=""
	I1206 08:47:24.722877   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:24.723244   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:25.222930   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.223006   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.223365   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:25.223455   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:25.723213   48683 type.go:168] "Request Body" body=""
	I1206 08:47:25.723279   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:25.723596   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.223491   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.223588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.223913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:26.722621   48683 type.go:168] "Request Body" body=""
	I1206 08:47:26.722699   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:26.723036   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.222525   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.222628   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.222892   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:27.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:47:27.722651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:27.722982   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:27.723035   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:27.756528   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:27.814954   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:27.818792   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:27.818824   48683 retry.go:31] will retry after 5.399057422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
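	[editor's note] Each kubectl apply above fails before the manifest is even considered: client-side validation first downloads the OpenAPI schema from the apiserver, and it is that request that dies with connection refused, hence the suggestion to pass --validate=false. A sketch reproducing just the schema fetch; the TLS verification skip is for this ad-hoc probe only (kubectl itself trusts the apiserver via the kubeconfig CA).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	// The exact URL kubectl's validator is failing to reach in the log.
    	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
    	if err != nil {
    		// While this fails, `kubectl apply --validate=false` would skip the
    		// schema download entirely, as the error message suggests.
    		fmt.Println("openapi fetch failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("openapi endpoint reachable:", resp.Status)
    }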
	I1206 08:47:28.223412   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.223490   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.223811   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.723414   48683 type.go:168] "Request Body" body=""
	I1206 08:47:28.723485   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:28.723794   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:28.802108   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:28.864913   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:28.864953   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:28.864972   48683 retry.go:31] will retry after 3.285056528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:29.223479   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.223556   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.223857   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:29.722601   48683 type.go:168] "Request Body" body=""
	I1206 08:47:29.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:29.723030   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:29.723087   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:30.222650   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.222720   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.223035   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:30.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:47:30.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:30.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.222982   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.223061   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.223424   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:31.723202   48683 type.go:168] "Request Body" body=""
	I1206 08:47:31.723273   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:31.723614   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:31.723661   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:32.150291   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:32.207920   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:32.211781   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.211813   48683 retry.go:31] will retry after 10.805243336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:32.223065   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.223158   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.223541   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:32.723329   48683 type.go:168] "Request Body" body=""
	I1206 08:47:32.723438   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:32.723744   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.218182   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:33.222610   48683 type.go:168] "Request Body" body=""
	I1206 08:47:33.222677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:33.222931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:33.295753   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:33.295946   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:33.295967   48683 retry.go:31] will retry after 9.227502372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:33.723484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:33.723575   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:33.723917   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:33.723973   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
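	[editor's note] "connect: connection refused" is an active TCP rejection: the host answers, but nothing is listening on port 8441, i.e. the apiserver process itself is down rather than the network path. A quick probe that distinguishes this from a timeout (sketch only):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Refused = host reachable, port closed; a timeout here would instead
    	// point at routing or firewall problems.
    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not accepting connections:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port 8441 is accepting connections")
    }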
	I1206 08:47:34.222605   48683 type.go:168] "Request Body" body=""
	I1206 08:47:34.222681   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:34.223037   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:34.723424   48683 type.go:168] "Request Body" body=""
	I1206 08:47:34.723499   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:34.723811   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:35.222543   48683 type.go:168] "Request Body" body=""
	I1206 08:47:35.222621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:35.222963   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:35.722601   48683 type.go:168] "Request Body" body=""
	I1206 08:47:35.722678   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:35.723029   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:36.223123   48683 type.go:168] "Request Body" body=""
	I1206 08:47:36.223195   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:36.223476   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:36.223516   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:36.723305   48683 type.go:168] "Request Body" body=""
	I1206 08:47:36.723388   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:36.723674   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:37.223484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:37.223557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:37.223866   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:37.723315   48683 type.go:168] "Request Body" body=""
	I1206 08:47:37.723395   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:37.723693   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:38.223481   48683 type.go:168] "Request Body" body=""
	I1206 08:47:38.223553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:38.223887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:38.223937   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:38.722588   48683 type.go:168] "Request Body" body=""
	I1206 08:47:38.722659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:38.723024   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:39.223350   48683 type.go:168] "Request Body" body=""
	I1206 08:47:39.223435   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:39.223711   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:39.723507   48683 type.go:168] "Request Body" body=""
	I1206 08:47:39.723587   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:39.723926   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:40.222518   48683 type.go:168] "Request Body" body=""
	I1206 08:47:40.222602   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:40.223000   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:40.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:47:40.723573   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:40.723901   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:40.723952   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:41.222532   48683 type.go:168] "Request Body" body=""
	I1206 08:47:41.222606   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:41.222910   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:41.722688   48683 type.go:168] "Request Body" body=""
	I1206 08:47:41.722766   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:41.723083   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:42.222810   48683 type.go:168] "Request Body" body=""
	I1206 08:47:42.222891   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:42.223201   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:42.523700   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:42.586651   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:42.586695   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:42.586713   48683 retry.go:31] will retry after 12.2898811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:42.723024   48683 type.go:168] "Request Body" body=""
	I1206 08:47:42.723100   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:42.723445   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:43.017838   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:47:43.079371   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:43.079435   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:43.079458   48683 retry.go:31] will retry after 19.494910144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:43.222603   48683 type.go:168] "Request Body" body=""
	I1206 08:47:43.222692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:43.223135   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:43.223199   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:43.722619   48683 type.go:168] "Request Body" body=""
	I1206 08:47:43.722697   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:43.722959   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:44.222540   48683 type.go:168] "Request Body" body=""
	I1206 08:47:44.222614   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:44.222964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:44.722637   48683 type.go:168] "Request Body" body=""
	I1206 08:47:44.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:44.723067   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:45.222713   48683 type.go:168] "Request Body" body=""
	I1206 08:47:45.222784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:45.223156   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:45.223228   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:45.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:47:45.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:45.722969   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:46.223003   48683 type.go:168] "Request Body" body=""
	I1206 08:47:46.223089   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:46.223469   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:46.723273   48683 type.go:168] "Request Body" body=""
	I1206 08:47:46.723345   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:46.723681   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:47.223099   48683 type.go:168] "Request Body" body=""
	I1206 08:47:47.223167   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:47.223496   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:47.223542   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:47.723308   48683 type.go:168] "Request Body" body=""
	I1206 08:47:47.723392   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:47.723713   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:48.223454   48683 type.go:168] "Request Body" body=""
	I1206 08:47:48.223519   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:48.223802   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:48.722564   48683 type.go:168] "Request Body" body=""
	I1206 08:47:48.722647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:48.722998   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:49.222713   48683 type.go:168] "Request Body" body=""
	I1206 08:47:49.222788   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:49.223109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:49.722484   48683 type.go:168] "Request Body" body=""
	I1206 08:47:49.722561   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:49.722823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:49.722870   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:50.222580   48683 type.go:168] "Request Body" body=""
	I1206 08:47:50.222659   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:50.222990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:50.722702   48683 type.go:168] "Request Body" body=""
	I1206 08:47:50.722785   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:50.723086   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:51.222858   48683 type.go:168] "Request Body" body=""
	I1206 08:47:51.222936   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:51.223324   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:51.723237   48683 type.go:168] "Request Body" body=""
	I1206 08:47:51.723311   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:51.723634   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:51.723682   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:52.223453   48683 type.go:168] "Request Body" body=""
	I1206 08:47:52.223522   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:52.223869   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:52.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:47:52.722642   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:52.722897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:53.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:47:53.222638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:53.222985   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:53.722688   48683 type.go:168] "Request Body" body=""
	I1206 08:47:53.722770   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:53.723108   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:54.222503   48683 type.go:168] "Request Body" body=""
	I1206 08:47:54.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:54.222905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:54.222955   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:54.722588   48683 type.go:168] "Request Body" body=""
	I1206 08:47:54.722660   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:54.723065   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:54.877464   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:47:54.933804   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:47:54.937955   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:47:54.937987   48683 retry.go:31] will retry after 17.91075527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
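	Each "apply failed, will retry: ... Process exited with status 1" block is minikube executing kubectl inside the node (via ssh_runner) and classifying the result by exit status, with captured stdout/stderr folded into the log. A hedged local sketch of that classification with os/exec (the command mirrors the log line; running it over SSH is elided):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "os/exec"
	    )

	    // applyManifest runs kubectl apply and, on failure, reports the exit
	    // status together with the captured output -- mirroring the
	    // "Process exited with status 1" + stdout/stderr blocks above.
	    func applyManifest(path string) error {
	        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
	            "kubectl", "apply", "--force", "-f", path)
	        out, err := cmd.CombinedOutput()
	        var exitErr *exec.ExitError
	        if errors.As(err, &exitErr) {
	            return fmt.Errorf("process exited with status %d: %s", exitErr.ExitCode(), out)
	        }
	        return err
	    }

	The stderr in these blocks also hints why the --validate=false escape hatch suggested by kubectl itself would not rescue the apply: validation merely fails first, and with the apiserver refusing connections the apply request proper could not succeed either.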
	I1206 08:47:55.223442   48683 type.go:168] "Request Body" body=""
	I1206 08:47:55.223519   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:55.223852   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:55.722542   48683 type.go:168] "Request Body" body=""
	I1206 08:47:55.722606   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:55.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:56.222999   48683 type.go:168] "Request Body" body=""
	I1206 08:47:56.223070   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:56.223429   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:56.223487   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:56.723218   48683 type.go:168] "Request Body" body=""
	I1206 08:47:56.723287   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:56.723646   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:57.223125   48683 type.go:168] "Request Body" body=""
	I1206 08:47:57.223203   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:57.223494   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:57.722995   48683 type.go:168] "Request Body" body=""
	I1206 08:47:57.723069   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:57.723443   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:58.223117   48683 type.go:168] "Request Body" body=""
	I1206 08:47:58.223189   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:58.223566   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:47:58.223620   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:47:58.723372   48683 type.go:168] "Request Body" body=""
	I1206 08:47:58.723454   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:58.723711   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:59.223465   48683 type.go:168] "Request Body" body=""
	I1206 08:47:59.223543   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:59.223912   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:47:59.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:47:59.722619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:47:59.722939   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:00.247414   48683 type.go:168] "Request Body" body=""
	I1206 08:48:00.247503   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:00.247882   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:00.247935   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:00.722555   48683 type.go:168] "Request Body" body=""
	I1206 08:48:00.722626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:00.722938   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:01.222887   48683 type.go:168] "Request Body" body=""
	I1206 08:48:01.222999   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:01.223358   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:01.723162   48683 type.go:168] "Request Body" body=""
	I1206 08:48:01.723235   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:01.723597   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:02.223412   48683 type.go:168] "Request Body" body=""
	I1206 08:48:02.223493   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:02.223823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:02.575367   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:02.637904   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:02.637958   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:02.637977   48683 retry.go:31] will retry after 12.943468008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:02.723120   48683 type.go:168] "Request Body" body=""
	I1206 08:48:02.723231   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:02.723512   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:02.723552   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:03.223325   48683 type.go:168] "Request Body" body=""
	I1206 08:48:03.223416   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:03.223738   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:03.723412   48683 type.go:168] "Request Body" body=""
	I1206 08:48:03.723492   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:03.723836   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:04.222479   48683 type.go:168] "Request Body" body=""
	I1206 08:48:04.222557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:04.222823   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:04.722559   48683 type.go:168] "Request Body" body=""
	I1206 08:48:04.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:04.722983   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:05.222708   48683 type.go:168] "Request Body" body=""
	I1206 08:48:05.222783   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:05.223149   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:05.223222   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:05.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:48:05.722620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:05.722946   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:06.223159   48683 type.go:168] "Request Body" body=""
	I1206 08:48:06.223264   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:06.223665   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:06.723461   48683 type.go:168] "Request Body" body=""
	I1206 08:48:06.723536   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:06.723855   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:07.222524   48683 type.go:168] "Request Body" body=""
	I1206 08:48:07.222592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:07.222878   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:07.722594   48683 type.go:168] "Request Body" body=""
	I1206 08:48:07.722670   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:07.723027   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:07.723084   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:08.222600   48683 type.go:168] "Request Body" body=""
	I1206 08:48:08.222686   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:08.223036   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:08.722507   48683 type.go:168] "Request Body" body=""
	I1206 08:48:08.722579   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:08.722903   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:09.222614   48683 type.go:168] "Request Body" body=""
	I1206 08:48:09.222685   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:09.222989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:09.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:48:09.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:09.723015   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:10.223441   48683 type.go:168] "Request Body" body=""
	I1206 08:48:10.223507   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:10.223798   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:10.223853   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:10.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:48:10.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:10.723077   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:11.222928   48683 type.go:168] "Request Body" body=""
	I1206 08:48:11.223022   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:11.223407   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:11.723237   48683 type.go:168] "Request Body" body=""
	I1206 08:48:11.723308   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:11.723611   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:12.223404   48683 type.go:168] "Request Body" body=""
	I1206 08:48:12.223497   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:12.223815   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:12.223876   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:12.722553   48683 type.go:168] "Request Body" body=""
	I1206 08:48:12.722626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:12.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:12.849275   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:12.904952   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:12.908634   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:12.908667   48683 retry.go:31] will retry after 25.236445918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:13.223053   48683 type.go:168] "Request Body" body=""
	I1206 08:48:13.223119   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:13.223405   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:13.723248   48683 type.go:168] "Request Body" body=""
	I1206 08:48:13.723328   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:13.723664   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:14.223478   48683 type.go:168] "Request Body" body=""
	I1206 08:48:14.223558   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:14.223874   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:14.223925   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:14.722512   48683 type.go:168] "Request Body" body=""
	I1206 08:48:14.722592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:14.722886   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:15.222579   48683 type.go:168] "Request Body" body=""
	I1206 08:48:15.222667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:15.222959   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:15.582577   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:15.646326   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:15.649856   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 08:48:15.649887   48683 retry.go:31] will retry after 20.09954841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
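	By this point every failure in the section reduces to the same symptom: nothing is accepting TCP connections on port 8441, whether addressed as localhost (kubectl's OpenAPI download) or as 192.168.49.2 (the readiness polls). A raw TCP dial confirms that independently of kubectl; a minimal sketch, with host and port taken from the log:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // main probes the apiserver endpoint with a plain TCP connect. While
	    // the control plane is down it fails exactly like the log lines:
	    // "dial tcp 192.168.49.2:8441: connect: connection refused".
	    func main() {
	        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	        if err != nil {
	            fmt.Println("apiserver not reachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("apiserver port is accepting connections")
	    }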
	I1206 08:48:15.723221   48683 type.go:168] "Request Body" body=""
	I1206 08:48:15.723293   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:15.723656   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:16.222458   48683 type.go:168] "Request Body" body=""
	I1206 08:48:16.222526   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:16.222836   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:16.723520   48683 type.go:168] "Request Body" body=""
	I1206 08:48:16.723594   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:16.723935   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:16.723996   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:17.222517   48683 type.go:168] "Request Body" body=""
	I1206 08:48:17.222599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:17.222939   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:17.722495   48683 type.go:168] "Request Body" body=""
	I1206 08:48:17.722573   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:17.722891   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:18.222579   48683 type.go:168] "Request Body" body=""
	I1206 08:48:18.222652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:18.222993   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:18.722583   48683 type.go:168] "Request Body" body=""
	I1206 08:48:18.722663   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:18.723022   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:19.222693   48683 type.go:168] "Request Body" body=""
	I1206 08:48:19.222763   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:19.223022   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:19.223069   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:19.722578   48683 type.go:168] "Request Body" body=""
	I1206 08:48:19.722651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:19.723010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:20.222589   48683 type.go:168] "Request Body" body=""
	I1206 08:48:20.222663   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:20.223016   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:20.723476   48683 type.go:168] "Request Body" body=""
	I1206 08:48:20.723548   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:20.723815   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:21.222793   48683 type.go:168] "Request Body" body=""
	I1206 08:48:21.222863   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:21.223194   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:21.223251   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:21.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:48:21.722654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:21.722963   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:22.222619   48683 type.go:168] "Request Body" body=""
	I1206 08:48:22.222685   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:22.222954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:22.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:48:22.722647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:22.722987   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:23.222686   48683 type.go:168] "Request Body" body=""
	I1206 08:48:23.222759   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:23.223112   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:23.722801   48683 type.go:168] "Request Body" body=""
	I1206 08:48:23.722870   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:23.723132   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:23.723172   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:24.222563   48683 type.go:168] "Request Body" body=""
	I1206 08:48:24.222639   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:24.222974   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:24.722536   48683 type.go:168] "Request Body" body=""
	I1206 08:48:24.722609   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:24.722956   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:25.223202   48683 type.go:168] "Request Body" body=""
	I1206 08:48:25.223267   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:25.223549   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:25.723347   48683 type.go:168] "Request Body" body=""
	I1206 08:48:25.723448   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:25.723817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:25.723881   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:26.222857   48683 type.go:168] "Request Body" body=""
	I1206 08:48:26.222930   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:26.223262   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:26.722500   48683 type.go:168] "Request Body" body=""
	I1206 08:48:26.722570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:26.722886   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:27.222528   48683 type.go:168] "Request Body" body=""
	I1206 08:48:27.222598   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:27.222916   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:27.722616   48683 type.go:168] "Request Body" body=""
	I1206 08:48:27.722695   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:27.723043   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:28.222543   48683 type.go:168] "Request Body" body=""
	I1206 08:48:28.222622   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:28.222922   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:28.222981   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:28.722606   48683 type.go:168] "Request Body" body=""
	I1206 08:48:28.722703   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:28.723095   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:29.222574   48683 type.go:168] "Request Body" body=""
	I1206 08:48:29.222649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:29.222993   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:29.722674   48683 type.go:168] "Request Body" body=""
	I1206 08:48:29.722742   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:29.723069   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:30.222789   48683 type.go:168] "Request Body" body=""
	I1206 08:48:30.222864   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:30.223189   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:30.223256   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: eleven more identical GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 polls, one every ~500ms from 08:48:30.722 through 08:48:35.722, each returning an empty response with "connect: connection refused"; node_ready retry warnings were logged again at 08:48:32 and 08:48:34]
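
The loop above is minikube waiting for the node object to become fetchable from the apiserver. For orientation, here is a minimal Go sketch of this style of readiness poll. It is a hypothetical illustration, not minikube's actual node_ready.go: the endpoint and the ~500ms cadence are copied from the log, and the sketch skips TLS verification where minikube really authenticates with client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls url every 500ms until the apiserver answers 200
// or the timeout expires, mirroring the cadence visible in the log.
func waitNodeReady(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster's cert is not in our trust store; minikube
		// really uses client certs (assumption made for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // reachable; a real check would now inspect the Ready condition
			}
			err = fmt.Errorf("unexpected status %s", resp.Status)
		}
		if attempt%5 == 0 {
			// The log emits a node_ready warning every few failed attempts.
			fmt.Printf("W node not ready yet (will retry): %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node not ready within %s", timeout)
}

func main() {
	// Endpoint copied from the log lines above.
	err := waitNodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-090986", 10*time.Second)
	fmt.Println("result:", err)
}

Until kube-apiserver binds port 8441, every such attempt fails at TCP connect, which is why each response line in the log reports an empty status and zero milliseconds.
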
	I1206 08:48:35.750369   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:48:35.818338   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818385   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:35.818494   48683 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[log condensed: four more identical GET polls from 08:48:36.223 through 08:48:37.723, all refused, with another node_ready retry warning at 08:48:37]
	I1206 08:48:38.145414   48683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:48:38.206093   48683 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210075   48683 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 08:48:38.210171   48683 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 08:48:38.213345   48683 out.go:179] * Enabled addons: 
	I1206 08:48:38.217127   48683 addons.go:530] duration metric: took 1m22.464883403s for enable addons: enabled=[]
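
Both addon failures above follow the same pattern: kubectl cannot validate the manifest because openapi download needs the same unreachable apiserver, minikube logs "apply failed, will retry", and after 1m22s it gives up with an empty enabled=[] list. A rough Go sketch of that retry pattern follows; it is hypothetical (minikube's addons.go drives this through its own command runner), and the binary and manifest paths are copied from the log rather than guaranteed to exist on the machine running the sketch.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or the deadline
// passes, echoing the "apply failed, will retry" behaviour in the log.
func applyWithRetry(kubectl, kubeconfig, manifest string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// sudo accepts VAR=value assignments before the command,
		// exactly as the logged command line does.
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig,
			kubectl, "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		}
		fmt.Printf("W apply failed, will retry: %v\n", err)
		time.Sleep(2 * time.Second)
	}
}

func main() {
	// Paths copied from the log; they live inside the minikube node.
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		30*time.Second,
	)
	fmt.Println("result:", err)
}

Note that kubectl's own suggestion of --validate=false would not actually help here: validation only fails because the openapi fetch hits the downed apiserver, and the apply itself would need that same apiserver anyway.
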
	I1206 08:48:38.223238   48683 type.go:168] "Request Body" body=""
	I1206 08:48:38.223319   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:38.223680   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:38.723466   48683 type.go:168] "Request Body" body=""
	I1206 08:48:38.723534   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:38.723871   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:39.222501   48683 type.go:168] "Request Body" body=""
	I1206 08:48:39.222572   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:39.222930   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:39.722607   48683 type.go:168] "Request Body" body=""
	I1206 08:48:39.722682   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:39.723013   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:39.723066   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:40.222676   48683 type.go:168] "Request Body" body=""
	I1206 08:48:40.222756   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:40.223027   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:40.722552   48683 type.go:168] "Request Body" body=""
	I1206 08:48:40.722649   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:40.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:41.223120   48683 type.go:168] "Request Body" body=""
	I1206 08:48:41.223193   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:41.223622   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:41.723403   48683 type.go:168] "Request Body" body=""
	I1206 08:48:41.723475   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:41.723817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:41.723873   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:42.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:48:42.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:42.222978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:42.722684   48683 type.go:168] "Request Body" body=""
	I1206 08:48:42.722790   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:42.723129   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:43.222817   48683 type.go:168] "Request Body" body=""
	I1206 08:48:43.222915   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:43.223184   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:43.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:48:43.722658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:43.723004   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:44.222598   48683 type.go:168] "Request Body" body=""
	I1206 08:48:44.222684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:44.223013   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:44.223067   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:44.722714   48683 type.go:168] "Request Body" body=""
	I1206 08:48:44.722785   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:44.723069   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:45.222844   48683 type.go:168] "Request Body" body=""
	I1206 08:48:45.222932   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:45.223348   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:45.723174   48683 type.go:168] "Request Body" body=""
	I1206 08:48:45.723260   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:45.723605   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:46.222507   48683 type.go:168] "Request Body" body=""
	I1206 08:48:46.222584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:46.222918   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:46.722555   48683 type.go:168] "Request Body" body=""
	I1206 08:48:46.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:46.722952   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:46.723007   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:47.222685   48683 type.go:168] "Request Body" body=""
	I1206 08:48:47.222760   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:47.223112   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:47.722496   48683 type.go:168] "Request Body" body=""
	I1206 08:48:47.722563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:47.722826   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:48.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:48:48.222616   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:48.222974   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:48.722711   48683 type.go:168] "Request Body" body=""
	I1206 08:48:48.722784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:48.723121   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:48.723172   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:49.222551   48683 type.go:168] "Request Body" body=""
	I1206 08:48:49.222621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:49.222915   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:49.722650   48683 type.go:168] "Request Body" body=""
	I1206 08:48:49.722727   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:49.723082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:50.222645   48683 type.go:168] "Request Body" body=""
	I1206 08:48:50.222761   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:50.223073   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:50.722501   48683 type.go:168] "Request Body" body=""
	I1206 08:48:50.722569   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:50.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:51.222952   48683 type.go:168] "Request Body" body=""
	I1206 08:48:51.223025   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:51.223425   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:51.223480   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:51.723105   48683 type.go:168] "Request Body" body=""
	I1206 08:48:51.723185   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:51.723538   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:52.223323   48683 type.go:168] "Request Body" body=""
	I1206 08:48:52.223409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:52.223689   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:52.722451   48683 type.go:168] "Request Body" body=""
	I1206 08:48:52.722525   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:52.722893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:53.222606   48683 type.go:168] "Request Body" body=""
	I1206 08:48:53.222684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:53.223017   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:53.722735   48683 type.go:168] "Request Body" body=""
	I1206 08:48:53.722801   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:53.723122   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:53.723177   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:54.222846   48683 type.go:168] "Request Body" body=""
	I1206 08:48:54.222924   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:54.223260   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:54.722973   48683 type.go:168] "Request Body" body=""
	I1206 08:48:54.723056   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:54.723447   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:55.223281   48683 type.go:168] "Request Body" body=""
	I1206 08:48:55.223354   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:55.223701   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:55.723485   48683 type.go:168] "Request Body" body=""
	I1206 08:48:55.723577   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:55.723911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:55.723962   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:56.222980   48683 type.go:168] "Request Body" body=""
	I1206 08:48:56.223059   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:56.223408   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:56.723182   48683 type.go:168] "Request Body" body=""
	I1206 08:48:56.723251   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:56.723637   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:57.223421   48683 type.go:168] "Request Body" body=""
	I1206 08:48:57.223498   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:57.223873   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:57.722566   48683 type.go:168] "Request Body" body=""
	I1206 08:48:57.722642   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:57.722994   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:58.222529   48683 type.go:168] "Request Body" body=""
	I1206 08:48:58.222603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:58.222866   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:48:58.222905   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:48:58.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:48:58.722681   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:58.723002   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:59.222616   48683 type.go:168] "Request Body" body=""
	I1206 08:48:59.222687   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:59.223028   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:48:59.722572   48683 type.go:168] "Request Body" body=""
	I1206 08:48:59.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:48:59.722925   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:00.222639   48683 type.go:168] "Request Body" body=""
	I1206 08:49:00.222712   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:00.223014   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:00.223060   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:00.722635   48683 type.go:168] "Request Body" body=""
	I1206 08:49:00.722725   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:00.723063   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:01.223028   48683 type.go:168] "Request Body" body=""
	I1206 08:49:01.223234   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:01.223616   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:01.723323   48683 type.go:168] "Request Body" body=""
	I1206 08:49:01.723423   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:01.723798   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:02.223472   48683 type.go:168] "Request Body" body=""
	I1206 08:49:02.223571   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:02.223936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:02.223997   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:02.722537   48683 type.go:168] "Request Body" body=""
	I1206 08:49:02.722619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:02.722919   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:03.222564   48683 type.go:168] "Request Body" body=""
	I1206 08:49:03.222635   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:03.222942   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:03.722533   48683 type.go:168] "Request Body" body=""
	I1206 08:49:03.722640   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:03.722941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:04.222483   48683 type.go:168] "Request Body" body=""
	I1206 08:49:04.222572   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:04.222897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:04.722446   48683 type.go:168] "Request Body" body=""
	I1206 08:49:04.722517   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:04.722832   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:04.722879   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:05.222585   48683 type.go:168] "Request Body" body=""
	I1206 08:49:05.222673   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:05.222992   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:05.723297   48683 type.go:168] "Request Body" body=""
	I1206 08:49:05.723409   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:05.723669   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:06.223469   48683 type.go:168] "Request Body" body=""
	I1206 08:49:06.223552   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:06.223906   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:06.722512   48683 type.go:168] "Request Body" body=""
	I1206 08:49:06.722590   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:06.722911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:06.722967   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:07.222542   48683 type.go:168] "Request Body" body=""
	I1206 08:49:07.222610   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:07.222868   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:07.722572   48683 type.go:168] "Request Body" body=""
	I1206 08:49:07.722677   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:07.723006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:08.222578   48683 type.go:168] "Request Body" body=""
	I1206 08:49:08.222672   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:08.222979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:08.722492   48683 type.go:168] "Request Body" body=""
	I1206 08:49:08.722560   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:08.722911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:09.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:49:09.222652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:09.222979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:09.223046   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:09.722577   48683 type.go:168] "Request Body" body=""
	I1206 08:49:09.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:09.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:10.222533   48683 type.go:168] "Request Body" body=""
	I1206 08:49:10.222600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:10.222896   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:10.722575   48683 type.go:168] "Request Body" body=""
	I1206 08:49:10.722654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:10.722954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:11.222977   48683 type.go:168] "Request Body" body=""
	I1206 08:49:11.223048   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:11.224357   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1206 08:49:11.224412   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:11.722526   48683 type.go:168] "Request Body" body=""
	I1206 08:49:11.722595   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:11.722867   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:12.222588   48683 type.go:168] "Request Body" body=""
	I1206 08:49:12.222693   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:12.223079   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:12.722676   48683 type.go:168] "Request Body" body=""
	I1206 08:49:12.722753   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:12.723090   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:13.222539   48683 type.go:168] "Request Body" body=""
	I1206 08:49:13.222608   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:13.222924   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:13.722639   48683 type.go:168] "Request Body" body=""
	I1206 08:49:13.722719   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:13.723062   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:13.723117   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:14.222784   48683 type.go:168] "Request Body" body=""
	I1206 08:49:14.222858   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:14.223204   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:14.722507   48683 type.go:168] "Request Body" body=""
	I1206 08:49:14.722588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:14.722847   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:15.222870   48683 type.go:168] "Request Body" body=""
	I1206 08:49:15.222963   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:15.223324   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:15.722751   48683 type.go:168] "Request Body" body=""
	I1206 08:49:15.722830   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:15.723164   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:15.723220   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:16.223389   48683 type.go:168] "Request Body" body=""
	I1206 08:49:16.223501   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:16.223841   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:16.723482   48683 type.go:168] "Request Body" body=""
	I1206 08:49:16.723553   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:16.723936   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:17.222504   48683 type.go:168] "Request Body" body=""
	I1206 08:49:17.222580   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:17.222930   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:17.722456   48683 type.go:168] "Request Body" body=""
	I1206 08:49:17.722525   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:17.722830   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:18.222500   48683 type.go:168] "Request Body" body=""
	I1206 08:49:18.222575   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:18.222913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:18.222970   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:18.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:49:18.722612   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:18.722957   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:19.223415   48683 type.go:168] "Request Body" body=""
	I1206 08:49:19.223481   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:19.223744   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:19.723518   48683 type.go:168] "Request Body" body=""
	I1206 08:49:19.723592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:19.723932   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:20.222529   48683 type.go:168] "Request Body" body=""
	I1206 08:49:20.222604   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:20.222980   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:20.223052   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:49:20.723464   48683 type.go:168] "Request Body" body=""
	I1206 08:49:20.723534   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:20.723877   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:21.222834   48683 type.go:168] "Request Body" body=""
	I1206 08:49:21.222916   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:21.223278   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:21.722585   48683 type.go:168] "Request Body" body=""
	I1206 08:49:21.722665   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:21.723037   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:49:22.222535   48683 type.go:168] "Request Body" body=""
	I1206 08:49:22.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:49:22.223029   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:49:22.223081   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	[... log condensed: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-090986 poll repeats every ~500ms from 08:49:22.7 through 08:50:22.7, each attempt returning no response; the node_ready.go:55 "connection refused" warning recurs roughly every 2s, the last at 08:50:21.223132 ...]
	I1206 08:50:23.223344   48683 type.go:168] "Request Body" body=""
	I1206 08:50:23.223446   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:23.223810   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:23.223863   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:23.722552   48683 type.go:168] "Request Body" body=""
	I1206 08:50:23.722625   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:23.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:24.222565   48683 type.go:168] "Request Body" body=""
	I1206 08:50:24.222636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:24.222967   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:24.722582   48683 type.go:168] "Request Body" body=""
	I1206 08:50:24.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:24.723045   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:25.222591   48683 type.go:168] "Request Body" body=""
	I1206 08:50:25.222675   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:25.222956   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:25.722490   48683 type.go:168] "Request Body" body=""
	I1206 08:50:25.722558   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:25.722858   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:25.722902   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:26.222992   48683 type.go:168] "Request Body" body=""
	I1206 08:50:26.223066   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:26.223429   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:26.723227   48683 type.go:168] "Request Body" body=""
	I1206 08:50:26.723293   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:26.723619   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:27.223425   48683 type.go:168] "Request Body" body=""
	I1206 08:50:27.223499   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:27.223833   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:27.722540   48683 type.go:168] "Request Body" body=""
	I1206 08:50:27.722621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:27.722968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:27.723024   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:28.222458   48683 type.go:168] "Request Body" body=""
	I1206 08:50:28.222528   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:28.222853   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:28.722553   48683 type.go:168] "Request Body" body=""
	I1206 08:50:28.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:28.722950   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:29.222553   48683 type.go:168] "Request Body" body=""
	I1206 08:50:29.222651   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:29.222978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:29.722677   48683 type.go:168] "Request Body" body=""
	I1206 08:50:29.722755   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:29.723172   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:29.723243   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:30.222914   48683 type.go:168] "Request Body" body=""
	I1206 08:50:30.222992   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:30.223302   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:30.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:50:30.722632   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:30.722926   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:31.222879   48683 type.go:168] "Request Body" body=""
	I1206 08:50:31.222948   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:31.223214   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:31.722593   48683 type.go:168] "Request Body" body=""
	I1206 08:50:31.722667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:31.723003   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:32.222561   48683 type.go:168] "Request Body" body=""
	I1206 08:50:32.222636   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:32.222931   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:32.222979   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:32.722487   48683 type.go:168] "Request Body" body=""
	I1206 08:50:32.722557   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:32.722887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:33.222576   48683 type.go:168] "Request Body" body=""
	I1206 08:50:33.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:33.222988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:33.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:50:33.722658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:33.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:34.222527   48683 type.go:168] "Request Body" body=""
	I1206 08:50:34.222618   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:34.222896   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:34.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:50:34.722637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:34.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:34.723033   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:35.222710   48683 type.go:168] "Request Body" body=""
	I1206 08:50:35.222784   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:35.223174   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:35.722627   48683 type.go:168] "Request Body" body=""
	I1206 08:50:35.722703   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:35.723010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:36.223126   48683 type.go:168] "Request Body" body=""
	I1206 08:50:36.223207   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:36.223553   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:36.723209   48683 type.go:168] "Request Body" body=""
	I1206 08:50:36.723279   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:36.723639   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:36.723696   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:37.223303   48683 type.go:168] "Request Body" body=""
	I1206 08:50:37.223402   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:37.223672   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:37.723463   48683 type.go:168] "Request Body" body=""
	I1206 08:50:37.723537   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:37.723869   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:38.222461   48683 type.go:168] "Request Body" body=""
	I1206 08:50:38.222541   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:38.222903   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:38.723171   48683 type.go:168] "Request Body" body=""
	I1206 08:50:38.723241   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:38.723601   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:39.223401   48683 type.go:168] "Request Body" body=""
	I1206 08:50:39.223483   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:39.223848   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:39.223901   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:39.722574   48683 type.go:168] "Request Body" body=""
	I1206 08:50:39.722647   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:39.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:40.222657   48683 type.go:168] "Request Body" body=""
	I1206 08:50:40.222728   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:40.222993   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:40.722677   48683 type.go:168] "Request Body" body=""
	I1206 08:50:40.722746   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:40.723061   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:41.222889   48683 type.go:168] "Request Body" body=""
	I1206 08:50:41.222968   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:41.223319   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:41.722870   48683 type.go:168] "Request Body" body=""
	I1206 08:50:41.722996   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:41.723258   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:41.723307   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:42.223105   48683 type.go:168] "Request Body" body=""
	I1206 08:50:42.223193   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:42.223674   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:42.723351   48683 type.go:168] "Request Body" body=""
	I1206 08:50:42.723454   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:42.723771   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:43.222466   48683 type.go:168] "Request Body" body=""
	I1206 08:50:43.222542   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:43.222830   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:43.723509   48683 type.go:168] "Request Body" body=""
	I1206 08:50:43.723588   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:43.723950   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:43.724004   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:44.222567   48683 type.go:168] "Request Body" body=""
	I1206 08:50:44.222639   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:44.222958   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:44.722508   48683 type.go:168] "Request Body" body=""
	I1206 08:50:44.722579   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:44.722910   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:45.222798   48683 type.go:168] "Request Body" body=""
	I1206 08:50:45.223002   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:45.223897   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:45.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:50:45.722648   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:45.722995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:46.224920   48683 type.go:168] "Request Body" body=""
	I1206 08:50:46.224987   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:46.225286   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:46.225327   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:46.723066   48683 type.go:168] "Request Body" body=""
	I1206 08:50:46.723140   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:46.723458   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:47.223236   48683 type.go:168] "Request Body" body=""
	I1206 08:50:47.223326   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:47.223694   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:47.723477   48683 type.go:168] "Request Body" body=""
	I1206 08:50:47.723544   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:47.723809   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:48.222493   48683 type.go:168] "Request Body" body=""
	I1206 08:50:48.222584   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:48.222924   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:48.722583   48683 type.go:168] "Request Body" body=""
	I1206 08:50:48.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:48.722995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:48.723048   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:49.222689   48683 type.go:168] "Request Body" body=""
	I1206 08:50:49.222760   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:49.223029   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:49.722545   48683 type.go:168] "Request Body" body=""
	I1206 08:50:49.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:49.722955   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:50.222575   48683 type.go:168] "Request Body" body=""
	I1206 08:50:50.222657   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:50.223048   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:50.722508   48683 type.go:168] "Request Body" body=""
	I1206 08:50:50.722578   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:50.722889   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:51.222958   48683 type.go:168] "Request Body" body=""
	I1206 08:50:51.223044   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:51.223428   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:51.223484   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:51.723238   48683 type.go:168] "Request Body" body=""
	I1206 08:50:51.723326   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:51.723667   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:52.223431   48683 type.go:168] "Request Body" body=""
	I1206 08:50:52.223506   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:52.223847   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:52.723473   48683 type.go:168] "Request Body" body=""
	I1206 08:50:52.723546   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:52.723905   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:53.222502   48683 type.go:168] "Request Body" body=""
	I1206 08:50:53.222578   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:53.222927   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:53.723407   48683 type.go:168] "Request Body" body=""
	I1206 08:50:53.723477   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:53.723780   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:53.723831   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:54.223258   48683 type.go:168] "Request Body" body=""
	I1206 08:50:54.223334   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:54.223684   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:54.723481   48683 type.go:168] "Request Body" body=""
	I1206 08:50:54.723559   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:54.723887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:55.222529   48683 type.go:168] "Request Body" body=""
	I1206 08:50:55.222605   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:55.222908   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:55.722574   48683 type.go:168] "Request Body" body=""
	I1206 08:50:55.722656   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:55.722995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:56.223029   48683 type.go:168] "Request Body" body=""
	I1206 08:50:56.223100   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:56.223448   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:56.223504   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:56.723288   48683 type.go:168] "Request Body" body=""
	I1206 08:50:56.723362   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:56.723641   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:57.223425   48683 type.go:168] "Request Body" body=""
	I1206 08:50:57.223504   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:57.223865   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:57.722469   48683 type.go:168] "Request Body" body=""
	I1206 08:50:57.722544   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:57.722884   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:58.222570   48683 type.go:168] "Request Body" body=""
	I1206 08:50:58.222638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:58.222923   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:58.722609   48683 type.go:168] "Request Body" body=""
	I1206 08:50:58.722693   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:58.723034   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:50:58.723089   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:50:59.222617   48683 type.go:168] "Request Body" body=""
	I1206 08:50:59.222692   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:59.223050   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:50:59.722758   48683 type.go:168] "Request Body" body=""
	I1206 08:50:59.722838   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:50:59.723205   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:00.222649   48683 type.go:168] "Request Body" body=""
	I1206 08:51:00.222741   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:00.223082   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:00.722924   48683 type.go:168] "Request Body" body=""
	I1206 08:51:00.723002   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:00.723336   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:00.723407   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:01.223151   48683 type.go:168] "Request Body" body=""
	I1206 08:51:01.223227   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:01.223550   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:01.723316   48683 type.go:168] "Request Body" body=""
	I1206 08:51:01.723407   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:01.723750   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:02.222491   48683 type.go:168] "Request Body" body=""
	I1206 08:51:02.222569   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:02.222910   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:02.722535   48683 type.go:168] "Request Body" body=""
	I1206 08:51:02.722609   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:02.722882   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:03.222585   48683 type.go:168] "Request Body" body=""
	I1206 08:51:03.222667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:03.223010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:03.223074   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:03.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:51:03.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:03.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:04.222516   48683 type.go:168] "Request Body" body=""
	I1206 08:51:04.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:04.222840   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:04.722555   48683 type.go:168] "Request Body" body=""
	I1206 08:51:04.722628   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:04.722970   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:05.222698   48683 type.go:168] "Request Body" body=""
	I1206 08:51:05.222780   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:05.223093   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:05.223142   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:05.722471   48683 type.go:168] "Request Body" body=""
	I1206 08:51:05.722549   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:05.722864   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:06.223041   48683 type.go:168] "Request Body" body=""
	I1206 08:51:06.223120   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:06.223579   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:06.723396   48683 type.go:168] "Request Body" body=""
	I1206 08:51:06.723470   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:06.723824   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:07.222502   48683 type.go:168] "Request Body" body=""
	I1206 08:51:07.222581   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:07.222893   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:07.722605   48683 type.go:168] "Request Body" body=""
	I1206 08:51:07.722673   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:07.723011   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:07.723085   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:08.222754   48683 type.go:168] "Request Body" body=""
	I1206 08:51:08.222842   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:08.223191   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:08.722662   48683 type.go:168] "Request Body" body=""
	I1206 08:51:08.722736   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:08.723038   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:09.222745   48683 type.go:168] "Request Body" body=""
	I1206 08:51:09.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:09.223142   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:09.722861   48683 type.go:168] "Request Body" body=""
	I1206 08:51:09.722941   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:09.723235   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:09.723279   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:10.222634   48683 type.go:168] "Request Body" body=""
	I1206 08:51:10.222706   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:10.222971   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:10.722568   48683 type.go:168] "Request Body" body=""
	I1206 08:51:10.722638   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:10.722937   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:11.223528   48683 type.go:168] "Request Body" body=""
	I1206 08:51:11.223600   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:11.223913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:51:11.723112   48683 type.go:168] "Request Body" body=""
	I1206 08:51:11.723177   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:11.723461   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:51:11.723503   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:51:12.223246   48683 type.go:168] "Request Body" body=""
	I1206 08:51:12.223319   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:51:12.223682   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-090986 repeat every ~500 ms from 08:51:12 through 08:52:14, each attempt failing immediately (status="", headers="", milliseconds=0); node_ready.go:55 logs the same "error getting node ... (will retry): ... dial tcp 192.168.49.2:8441: connect: connection refused" warning after every fourth or fifth attempt, last at 08:52:13.723278 ...]
	I1206 08:52:14.723151   48683 type.go:168] "Request Body" body=""
	I1206 08:52:14.723218   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:14.723511   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:15.223282   48683 type.go:168] "Request Body" body=""
	I1206 08:52:15.223353   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:15.223716   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:15.723508   48683 type.go:168] "Request Body" body=""
	I1206 08:52:15.723596   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:15.723933   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:15.723988   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:16.223075   48683 type.go:168] "Request Body" body=""
	I1206 08:52:16.223148   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:16.223467   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:16.723393   48683 type.go:168] "Request Body" body=""
	I1206 08:52:16.723470   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:16.723870   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:17.222594   48683 type.go:168] "Request Body" body=""
	I1206 08:52:17.222670   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:17.222997   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:17.722535   48683 type.go:168] "Request Body" body=""
	I1206 08:52:17.722611   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:17.722894   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:18.222588   48683 type.go:168] "Request Body" body=""
	I1206 08:52:18.222665   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:18.223008   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:18.223068   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:18.722580   48683 type.go:168] "Request Body" body=""
	I1206 08:52:18.722662   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:18.722990   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:19.222512   48683 type.go:168] "Request Body" body=""
	I1206 08:52:19.222583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:19.222898   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:19.722569   48683 type.go:168] "Request Body" body=""
	I1206 08:52:19.722641   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:19.722979   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:20.222575   48683 type.go:168] "Request Body" body=""
	I1206 08:52:20.222652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:20.222995   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:20.722493   48683 type.go:168] "Request Body" body=""
	I1206 08:52:20.722564   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:20.722881   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:20.722931   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:21.222827   48683 type.go:168] "Request Body" body=""
	I1206 08:52:21.222898   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:21.223270   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:21.722982   48683 type.go:168] "Request Body" body=""
	I1206 08:52:21.723059   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:21.723422   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:22.223208   48683 type.go:168] "Request Body" body=""
	I1206 08:52:22.223282   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:22.223570   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:22.723366   48683 type.go:168] "Request Body" body=""
	I1206 08:52:22.723481   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:22.723885   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:22.723946   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:23.222491   48683 type.go:168] "Request Body" body=""
	I1206 08:52:23.222570   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:23.222913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:23.722590   48683 type.go:168] "Request Body" body=""
	I1206 08:52:23.722661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:23.722923   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:24.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:52:24.222650   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:24.223028   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:24.722593   48683 type.go:168] "Request Body" body=""
	I1206 08:52:24.722671   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:24.723025   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:25.222591   48683 type.go:168] "Request Body" body=""
	I1206 08:52:25.222663   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:25.222988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:25.223036   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:25.722550   48683 type.go:168] "Request Body" body=""
	I1206 08:52:25.722630   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:25.722980   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:26.223051   48683 type.go:168] "Request Body" body=""
	I1206 08:52:26.223127   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:26.223495   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:26.723130   48683 type.go:168] "Request Body" body=""
	I1206 08:52:26.723210   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:26.723481   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:27.223253   48683 type.go:168] "Request Body" body=""
	I1206 08:52:27.223326   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:27.223710   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:27.223764   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:27.723405   48683 type.go:168] "Request Body" body=""
	I1206 08:52:27.723490   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:27.723850   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:28.222551   48683 type.go:168] "Request Body" body=""
	I1206 08:52:28.222617   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:28.222878   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:28.722558   48683 type.go:168] "Request Body" body=""
	I1206 08:52:28.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:28.722973   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:29.222678   48683 type.go:168] "Request Body" body=""
	I1206 08:52:29.222749   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:29.223068   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:29.722528   48683 type.go:168] "Request Body" body=""
	I1206 08:52:29.722594   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:29.722851   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:29.722889   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:30.222619   48683 type.go:168] "Request Body" body=""
	I1206 08:52:30.222697   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:30.223066   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:30.722574   48683 type.go:168] "Request Body" body=""
	I1206 08:52:30.722654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:30.722978   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:31.222962   48683 type.go:168] "Request Body" body=""
	I1206 08:52:31.223058   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:31.223464   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:31.723093   48683 type.go:168] "Request Body" body=""
	I1206 08:52:31.723169   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:31.723529   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:31.723588   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:32.223231   48683 type.go:168] "Request Body" body=""
	I1206 08:52:32.223306   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:32.223675   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:32.723451   48683 type.go:168] "Request Body" body=""
	I1206 08:52:32.723529   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:32.723844   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:33.222539   48683 type.go:168] "Request Body" body=""
	I1206 08:52:33.222619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:33.222968   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:33.722693   48683 type.go:168] "Request Body" body=""
	I1206 08:52:33.722771   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:33.723118   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:34.222504   48683 type.go:168] "Request Body" body=""
	I1206 08:52:34.222571   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:34.222833   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:34.222873   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:34.722546   48683 type.go:168] "Request Body" body=""
	I1206 08:52:34.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:34.723016   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:35.222749   48683 type.go:168] "Request Body" body=""
	I1206 08:52:35.222823   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:35.223165   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:35.722851   48683 type.go:168] "Request Body" body=""
	I1206 08:52:35.722928   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:35.723193   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:36.223352   48683 type.go:168] "Request Body" body=""
	I1206 08:52:36.223456   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:36.223828   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:36.223884   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:36.722547   48683 type.go:168] "Request Body" body=""
	I1206 08:52:36.722620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:36.722964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:37.222641   48683 type.go:168] "Request Body" body=""
	I1206 08:52:37.222713   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:37.223007   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:37.722576   48683 type.go:168] "Request Body" body=""
	I1206 08:52:37.722652   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:37.722999   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:38.222708   48683 type.go:168] "Request Body" body=""
	I1206 08:52:38.222795   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:38.223113   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:38.722765   48683 type.go:168] "Request Body" body=""
	I1206 08:52:38.722845   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:38.723210   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:38.723261   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:39.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:52:39.222663   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:39.223000   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:39.722551   48683 type.go:168] "Request Body" body=""
	I1206 08:52:39.722627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:39.722951   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:40.222518   48683 type.go:168] "Request Body" body=""
	I1206 08:52:40.222590   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:40.222911   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:40.722636   48683 type.go:168] "Request Body" body=""
	I1206 08:52:40.722713   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:40.723068   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:41.222845   48683 type.go:168] "Request Body" body=""
	I1206 08:52:41.222923   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:41.223258   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:41.223312   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:41.722994   48683 type.go:168] "Request Body" body=""
	I1206 08:52:41.723058   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:41.723414   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:42.223248   48683 type.go:168] "Request Body" body=""
	I1206 08:52:42.223346   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:42.223858   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:42.722455   48683 type.go:168] "Request Body" body=""
	I1206 08:52:42.722526   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:42.722872   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:43.223420   48683 type.go:168] "Request Body" body=""
	I1206 08:52:43.223489   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:43.223805   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:43.223855   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:43.722522   48683 type.go:168] "Request Body" body=""
	I1206 08:52:43.722596   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:43.722966   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:44.222553   48683 type.go:168] "Request Body" body=""
	I1206 08:52:44.222629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:44.222964   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:44.722543   48683 type.go:168] "Request Body" body=""
	I1206 08:52:44.722637   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:44.722989   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:45.222930   48683 type.go:168] "Request Body" body=""
	I1206 08:52:45.223076   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:45.223835   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:45.223976   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:45.722584   48683 type.go:168] "Request Body" body=""
	I1206 08:52:45.722679   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:45.723037   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:46.223060   48683 type.go:168] "Request Body" body=""
	I1206 08:52:46.223131   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:46.223436   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:46.723272   48683 type.go:168] "Request Body" body=""
	I1206 08:52:46.723352   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:46.723748   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:47.222452   48683 type.go:168] "Request Body" body=""
	I1206 08:52:47.222527   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:47.222887   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:47.722563   48683 type.go:168] "Request Body" body=""
	I1206 08:52:47.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:47.722913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:47.722953   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:48.222579   48683 type.go:168] "Request Body" body=""
	I1206 08:52:48.222661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:48.222999   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:48.722576   48683 type.go:168] "Request Body" body=""
	I1206 08:52:48.722667   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:48.723001   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:49.222559   48683 type.go:168] "Request Body" body=""
	I1206 08:52:49.222626   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:49.222906   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:49.722569   48683 type.go:168] "Request Body" body=""
	I1206 08:52:49.722642   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:49.722975   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:49.723031   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:50.222588   48683 type.go:168] "Request Body" body=""
	I1206 08:52:50.222661   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:50.223020   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:50.723341   48683 type.go:168] "Request Body" body=""
	I1206 08:52:50.723423   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:50.723685   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:51.223482   48683 type.go:168] "Request Body" body=""
	I1206 08:52:51.223558   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:51.223901   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:51.722502   48683 type.go:168] "Request Body" body=""
	I1206 08:52:51.722583   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:51.722933   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:52.222677   48683 type.go:168] "Request Body" body=""
	I1206 08:52:52.222742   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:52.223070   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:52.223122   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:52.722795   48683 type.go:168] "Request Body" body=""
	I1206 08:52:52.722878   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:52.723205   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:53.222584   48683 type.go:168] "Request Body" body=""
	I1206 08:52:53.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:53.222957   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:53.722552   48683 type.go:168] "Request Body" body=""
	I1206 08:52:53.722621   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:53.722913   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:54.222577   48683 type.go:168] "Request Body" body=""
	I1206 08:52:54.222654   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:54.223005   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:54.722713   48683 type.go:168] "Request Body" body=""
	I1206 08:52:54.722788   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:54.723170   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:54.723229   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:55.222503   48683 type.go:168] "Request Body" body=""
	I1206 08:52:55.222603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:55.222912   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:55.722613   48683 type.go:168] "Request Body" body=""
	I1206 08:52:55.722684   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:55.723022   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:56.223191   48683 type.go:168] "Request Body" body=""
	I1206 08:52:56.223262   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:56.223629   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:56.723299   48683 type.go:168] "Request Body" body=""
	I1206 08:52:56.723438   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:56.723703   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:56.723746   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:57.222463   48683 type.go:168] "Request Body" body=""
	I1206 08:52:57.222559   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:57.222925   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:57.722623   48683 type.go:168] "Request Body" body=""
	I1206 08:52:57.722694   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:57.723053   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:58.222555   48683 type.go:168] "Request Body" body=""
	I1206 08:52:58.222627   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:58.222882   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:58.722550   48683 type.go:168] "Request Body" body=""
	I1206 08:52:58.722619   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:58.722923   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:52:59.222596   48683 type.go:168] "Request Body" body=""
	I1206 08:52:59.222674   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:59.223010   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:52:59.223071   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:52:59.722703   48683 type.go:168] "Request Body" body=""
	I1206 08:52:59.722774   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:52:59.723041   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:00.222680   48683 type.go:168] "Request Body" body=""
	I1206 08:53:00.222765   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:00.223070   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:00.722902   48683 type.go:168] "Request Body" body=""
	I1206 08:53:00.722974   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:00.723300   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:01.223304   48683 type.go:168] "Request Body" body=""
	I1206 08:53:01.223397   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:01.223655   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:01.223703   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:01.723494   48683 type.go:168] "Request Body" body=""
	I1206 08:53:01.723563   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:01.723888   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:02.222575   48683 type.go:168] "Request Body" body=""
	I1206 08:53:02.222658   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:02.223040   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:02.722720   48683 type.go:168] "Request Body" body=""
	I1206 08:53:02.722789   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:02.723094   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:03.222570   48683 type.go:168] "Request Body" body=""
	I1206 08:53:03.222643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:03.223006   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:03.722718   48683 type.go:168] "Request Body" body=""
	I1206 08:53:03.722800   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:03.723133   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:03.723188   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:04.222478   48683 type.go:168] "Request Body" body=""
	I1206 08:53:04.222547   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:04.222820   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:04.722518   48683 type.go:168] "Request Body" body=""
	I1206 08:53:04.722592   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:04.722965   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:05.222540   48683 type.go:168] "Request Body" body=""
	I1206 08:53:05.222620   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:05.222941   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:05.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:53:05.722596   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:05.722915   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:06.223065   48683 type.go:168] "Request Body" body=""
	I1206 08:53:06.223136   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:06.223522   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:06.223575   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:06.723193   48683 type.go:168] "Request Body" body=""
	I1206 08:53:06.723275   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:06.723670   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:07.223474   48683 type.go:168] "Request Body" body=""
	I1206 08:53:07.223549   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:07.223817   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:07.722518   48683 type.go:168] "Request Body" body=""
	I1206 08:53:07.722603   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:07.722954   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:08.222651   48683 type.go:168] "Request Body" body=""
	I1206 08:53:08.222735   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:08.223112   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:08.722793   48683 type.go:168] "Request Body" body=""
	I1206 08:53:08.722864   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:08.723164   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:08.723216   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:09.222584   48683 type.go:168] "Request Body" body=""
	I1206 08:53:09.222655   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:09.222992   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:09.722671   48683 type.go:168] "Request Body" body=""
	I1206 08:53:09.722749   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:09.723103   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:10.222760   48683 type.go:168] "Request Body" body=""
	I1206 08:53:10.222832   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:10.223102   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:10.722556   48683 type.go:168] "Request Body" body=""
	I1206 08:53:10.722631   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:10.722988   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:11.222770   48683 type.go:168] "Request Body" body=""
	I1206 08:53:11.222841   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:11.223177   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:11.223230   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:11.723479   48683 type.go:168] "Request Body" body=""
	I1206 08:53:11.723562   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:11.723836   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:12.222547   48683 type.go:168] "Request Body" body=""
	I1206 08:53:12.222623   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:12.222981   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:12.722692   48683 type.go:168] "Request Body" body=""
	I1206 08:53:12.722772   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:12.723109   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:13.222517   48683 type.go:168] "Request Body" body=""
	I1206 08:53:13.222590   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:13.222851   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:13.722527   48683 type.go:168] "Request Body" body=""
	I1206 08:53:13.722599   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:13.722955   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:13.723015   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:14.222726   48683 type.go:168] "Request Body" body=""
	I1206 08:53:14.222802   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:14.223149   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:14.722562   48683 type.go:168] "Request Body" body=""
	I1206 08:53:14.722629   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:14.722912   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:15.222538   48683 type.go:168] "Request Body" body=""
	I1206 08:53:15.222617   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:15.222967   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:15.722571   48683 type.go:168] "Request Body" body=""
	I1206 08:53:15.722643   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:15.722981   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 08:53:16.222929   48683 type.go:168] "Request Body" body=""
	I1206 08:53:16.223005   48683 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-090986" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 08:53:16.223275   48683 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 08:53:16.223314   48683 node_ready.go:55] error getting node "functional-090986" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-090986": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 08:53:16.723228   48683 type.go:168] "Request Body" body=""
	I1206 08:53:16.723311   48683 node_ready.go:38] duration metric: took 6m0.000967258s for node "functional-090986" to be "Ready" ...
	I1206 08:53:16.726672   48683 out.go:203] 
	W1206 08:53:16.729718   48683 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 08:53:16.729749   48683 out.go:285] * 
	W1206 08:53:16.732326   48683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 08:53:16.735459   48683 out.go:203] 
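The 500ms loop above is a plain HTTP GET against the node object, retried until the 6m0s wait expires. A minimal sketch of the same probe in shell (endpoint and node name taken from the log; this is not minikube's own retry code):

    # Probe the node object the way the wait loop above does, every 500ms.
    APISERVER="https://192.168.49.2:8441"
    NODE="functional-090986"
    for _ in $(seq 1 10); do
      # -k skips CA verification; a real client would pass the cluster CA.
      if curl -sk --max-time 2 "$APISERVER/api/v1/nodes/$NODE" >/dev/null; then
        echo "apiserver reachable"; break
      fi
      echo "connection refused, retrying..."; sleep 0.5
    done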
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:53:24 functional-090986 containerd[5266]: time="2025-12-06T08:53:24.485393663Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:25 functional-090986 containerd[5266]: time="2025-12-06T08:53:25.558263180Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 06 08:53:25 functional-090986 containerd[5266]: time="2025-12-06T08:53:25.560376858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 06 08:53:25 functional-090986 containerd[5266]: time="2025-12-06T08:53:25.567036338Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:25 functional-090986 containerd[5266]: time="2025-12-06T08:53:25.567440259Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:26 functional-090986 containerd[5266]: time="2025-12-06T08:53:26.529633151Z" level=info msg="No images store for sha256:864d91b111549b1a614e1c2b69622472824686140966b61b9bf5ed9bf10b7a66"
	Dec 06 08:53:26 functional-090986 containerd[5266]: time="2025-12-06T08:53:26.531958316Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-090986\""
	Dec 06 08:53:26 functional-090986 containerd[5266]: time="2025-12-06T08:53:26.541003148Z" level=info msg="ImageCreate event name:\"sha256:5294eb1309299d240981eee230965d3e70b3f5d29d3eca33acb510d478dc7d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:26 functional-090986 containerd[5266]: time="2025-12-06T08:53:26.541434097Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:27 functional-090986 containerd[5266]: time="2025-12-06T08:53:27.338817563Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 06 08:53:27 functional-090986 containerd[5266]: time="2025-12-06T08:53:27.341282237Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 06 08:53:27 functional-090986 containerd[5266]: time="2025-12-06T08:53:27.343553435Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 06 08:53:27 functional-090986 containerd[5266]: time="2025-12-06T08:53:27.356083604Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.286132725Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.288633436Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.291637896Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.298280775Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.471801260Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.473941105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.482138732Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.482655683Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.600988204Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.603184650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.614679868Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 08:53:28 functional-090986 containerd[5266]: time="2025-12-06T08:53:28.615305433Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
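The ImageCreate/ImageUpdate events above correspond to the `cache add` commands recorded later in the audit table. A quick sketch to confirm the images actually landed in containerd, assuming crictl is present in the node container (it normally ships in the kicbase image):

    docker exec functional-090986 crictl images | grep -E 'pause|minikube-local-cache-test'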
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:53:32.774165    9386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:32.775203    9386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:32.776982    9386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:32.777671    9386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:53:32.780058    9386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
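kubectl can only fail like this if nothing is bound on 8441 inside the node. A one-line sketch to confirm (container name from the log; assumes ss from iproute2 is available in the image):

    docker exec functional-090986 sh -c 'ss -ltn | grep :8441 || echo "nothing listening on 8441"'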
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> kernel <==
	 08:53:32 up 36 min,  0 user,  load average: 0.52, 0.32, 0.55
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 08:53:29 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:30 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 06 08:53:30 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:30 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:30 functional-090986 kubelet[9255]: E1206 08:53:30.540581    9255 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:30 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:30 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:31 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 06 08:53:31 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:31 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:31 functional-090986 kubelet[9264]: E1206 08:53:31.304164    9264 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:31 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:31 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:31 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 06 08:53:31 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:31 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:32 functional-090986 kubelet[9299]: E1206 08:53:32.043889    9299 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:32 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:32 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 08:53:32 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 06 08:53:32 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:32 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 08:53:32 functional-090986 kubelet[9391]: E1206 08:53:32.776788    9391 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 08:53:32 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 08:53:32 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
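Every kubelet restart in the log above (counters 827 through 830) dies on the same validation: kubelet v1.35 refuses to start on a cgroup v1 host. The conventional way to check which hierarchy the host mounts, as a sketch:

    # cgroup2fs means cgroup v2; tmpfs means the legacy v1 hierarchy.
    stat -fc %T /sys/fs/cgroup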
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (419.203142ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
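The harness template only reads the APIServer field; the same Go template can be widened to every field the status command exposes (field names from minikube's status struct):

    out/minikube-linux-arm64 status -p functional-090986 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'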
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-090986 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 08:56:36.062123    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:57:57.331334    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:59:20.396023    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:01:36.065670    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:02:57.331524    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-090986 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m13.594963096s)

-- stdout --
	* [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00139193s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
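All three kubeadm attempts in the stderr above fail identically: the kubelet never reports healthy because of the cgroup v1 validation, so the 4m0s wait on http://127.0.0.1:10248/healthz times out. The `[patches]` lines show kubeadm already applies a strategic-merge patch to the kubeletconfiguration target; a hedged sketch of such a patch setting the failCgroupV1 field named in the kubelet error (the directory path is illustrative; the file name follows kubeadm's target+patchtype convention):

    mkdir -p /tmp/kubeadm-patches
    cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF
    # kubeadm consumes the directory via: kubeadm init --patches /tmp/kubeadm-patches ...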
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-090986 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m13.596232624s for "functional-090986" cluster.
I1206 09:05:47.425007    4292 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
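The inspect dump shows the apiserver's 8441/tcp mapped to a host port; the same value can be pulled directly with docker's template syntax (profile name from the log):

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-090986
    # per the dump above, this prints 32791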
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (349.453588ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-181746 image ls --format json --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format table --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls                                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ delete         │ -p functional-181746                                                                                                                                    │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ start          │ -p functional-090986 --alsologtostderr -v=8                                                                                                             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:47 UTC │                     │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:latest                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add minikube-local-cache-test:functional-090986                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache delete minikube-local-cache-test:functional-090986                                                                              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl images                                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	│ cache          │ functional-090986 cache reload                                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ kubectl        │ functional-090986 kubectl -- --context functional-090986 get pods                                                                                       │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	│ start          │ -p functional-090986 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
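The final audit row, with a start time but no end time, is the ExtraConfig restart whose post-mortem this is; it never returned. Reissuing it by hand from the same workspace (arguments exactly as recorded in the table) would be:

	out/minikube-linux-arm64 start -p functional-090986 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all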
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:53:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:53:33.876279   54452 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:53:33.876426   54452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:53:33.876430   54452 out.go:374] Setting ErrFile to fd 2...
	I1206 08:53:33.876434   54452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:53:33.876677   54452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:53:33.877013   54452 out.go:368] Setting JSON to false
	I1206 08:53:33.877825   54452 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2165,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:53:33.877882   54452 start.go:143] virtualization:  
	I1206 08:53:33.881239   54452 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:53:33.885112   54452 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:53:33.885177   54452 notify.go:221] Checking for updates...
	I1206 08:53:33.891576   54452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:53:33.894372   54452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:53:33.897142   54452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:53:33.900076   54452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:53:33.902894   54452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:53:33.906249   54452 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:53:33.906348   54452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:53:33.928682   54452 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:53:33.928770   54452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:53:33.993741   54452 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 08:53:33.983085793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:53:33.993843   54452 docker.go:319] overlay module found
	I1206 08:53:33.999105   54452 out.go:179] * Using the docker driver based on existing profile
	I1206 08:53:34.002148   54452 start.go:309] selected driver: docker
	I1206 08:53:34.002159   54452 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:34.002241   54452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:53:34.002360   54452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:53:34.059754   54452 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 08:53:34.048620994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:53:34.060212   54452 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:53:34.060235   54452 cni.go:84] Creating CNI manager for ""
	I1206 08:53:34.060282   54452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:53:34.060330   54452 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:34.065569   54452 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:53:34.068398   54452 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:53:34.071322   54452 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:53:34.074275   54452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:53:34.074316   54452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:53:34.074325   54452 cache.go:65] Caching tarball of preloaded images
	I1206 08:53:34.074364   54452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:53:34.074457   54452 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:53:34.074467   54452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:53:34.074577   54452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:53:34.094292   54452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:53:34.094303   54452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:53:34.094322   54452 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:53:34.094352   54452 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:53:34.094428   54452 start.go:364] duration metric: took 60.843µs to acquireMachinesLock for "functional-090986"
	I1206 08:53:34.094446   54452 start.go:96] Skipping create...Using existing machine configuration
	I1206 08:53:34.094451   54452 fix.go:54] fixHost starting: 
	I1206 08:53:34.094714   54452 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:53:34.110952   54452 fix.go:112] recreateIfNeeded on functional-090986: state=Running err=<nil>
	W1206 08:53:34.110973   54452 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 08:53:34.114350   54452 out.go:252] * Updating the running docker "functional-090986" container ...
	I1206 08:53:34.114380   54452 machine.go:94] provisionDockerMachine start ...
	I1206 08:53:34.114470   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.132110   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.132436   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.132441   54452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:53:34.290732   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:53:34.290745   54452 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:53:34.290806   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.309786   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.310075   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.310083   54452 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:53:34.468771   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:53:34.468838   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.492421   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.492726   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.492743   54452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:53:34.643743   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
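The empty output above means the first grep already matched, so neither the sed branch nor the tee append ran: the script is an idempotent /etc/hosts fixup that adds or rewrites a 127.0.1.1 mapping for the hostname only when one is missing. It can be spot-checked from the host with the ssh subcommand used elsewhere in this report (a verification sketch, not part of the recorded run):

	out/minikube-linux-arm64 -p functional-090986 ssh grep functional-090986 /etc/hosts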
	I1206 08:53:34.643757   54452 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:53:34.643785   54452 ubuntu.go:190] setting up certificates
	I1206 08:53:34.643793   54452 provision.go:84] configureAuth start
	I1206 08:53:34.643849   54452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:53:34.661031   54452 provision.go:143] copyHostCerts
	I1206 08:53:34.661090   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:53:34.661103   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:53:34.661173   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:53:34.661279   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:53:34.661283   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:53:34.661307   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:53:34.661364   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:53:34.661367   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:53:34.661387   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:53:34.661440   54452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
	I1206 08:53:35.261601   54452 provision.go:177] copyRemoteCerts
	I1206 08:53:35.261659   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:53:35.261707   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.278502   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.383098   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:53:35.400343   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:53:35.417458   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 08:53:35.434271   54452 provision.go:87] duration metric: took 790.45575ms to configureAuth
	I1206 08:53:35.434289   54452 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:53:35.434485   54452 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:53:35.434491   54452 machine.go:97] duration metric: took 1.320106202s to provisionDockerMachine
	I1206 08:53:35.434498   54452 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:53:35.434507   54452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:53:35.434552   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:53:35.434601   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.452073   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.559110   54452 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:53:35.562282   54452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:53:35.562301   54452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:53:35.562313   54452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:53:35.562372   54452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:53:35.562453   54452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:53:35.562529   54452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:53:35.562578   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:53:35.569704   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:53:35.586692   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:53:35.603733   54452 start.go:296] duration metric: took 169.221467ms for postStartSetup
	I1206 08:53:35.603809   54452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:53:35.603847   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.620625   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.725607   54452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:53:35.730716   54452 fix.go:56] duration metric: took 1.636258463s for fixHost
	I1206 08:53:35.730732   54452 start.go:83] releasing machines lock for "functional-090986", held for 1.636296668s
	I1206 08:53:35.730797   54452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:53:35.748170   54452 ssh_runner.go:195] Run: cat /version.json
	I1206 08:53:35.748211   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.748450   54452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:53:35.748491   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.780618   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.788438   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.895097   54452 ssh_runner.go:195] Run: systemctl --version
	I1206 08:53:35.994868   54452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 08:53:36.000428   54452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:53:36.000495   54452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:53:36.008950   54452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 08:53:36.008964   54452 start.go:496] detecting cgroup driver to use...
	I1206 08:53:36.008997   54452 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 08:53:36.009046   54452 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:53:36.024586   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:53:36.037573   54452 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:53:36.037628   54452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:53:36.053442   54452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:53:36.066493   54452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:53:36.187062   54452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:53:36.308311   54452 docker.go:234] disabling docker service ...
	I1206 08:53:36.308366   54452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:53:36.324390   54452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:53:36.337942   54452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:53:36.464363   54452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:53:36.601173   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:53:36.614787   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:53:36.630199   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:53:36.639943   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:53:36.649262   54452 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:53:36.649336   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:53:36.657952   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:53:36.666666   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:53:36.675637   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:53:36.684412   54452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:53:36.692740   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:53:36.701838   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:53:36.712344   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 08:53:36.721508   54452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:53:36.729269   54452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:53:36.736851   54452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:53:36.864978   54452 ssh_runner.go:195] Run: sudo systemctl restart containerd
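The sed series before this restart rewrites /etc/containerd/config.toml in place. Reassembled from those commands alone (an approximation; the real file carries full TOML table headers and many more keys), the values being enforced are:

	sandbox_image = "registry.k8s.io/pause:3.10.1"
	restrict_oom_score_adj = false
	SystemdCgroup = false
	conf_dir = "/etc/cni/net.d"
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true

The daemon-reload and containerd restart directly above are what make these edits take effect, matching the "cgroupfs" driver detected on the host.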
	I1206 08:53:37.021054   54452 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:53:37.021112   54452 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:53:37.025377   54452 start.go:564] Will wait 60s for crictl version
	I1206 08:53:37.025433   54452 ssh_runner.go:195] Run: which crictl
	I1206 08:53:37.029231   54452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:53:37.053402   54452 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 08:53:37.053462   54452 ssh_runner.go:195] Run: containerd --version
	I1206 08:53:37.077672   54452 ssh_runner.go:195] Run: containerd --version
	I1206 08:53:37.104087   54452 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 08:53:37.107051   54452 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:53:37.126470   54452 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:53:37.133471   54452 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 08:53:37.136362   54452 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:53:37.136495   54452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:53:37.136575   54452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:53:37.161065   54452 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:53:37.161078   54452 containerd.go:534] Images already preloaded, skipping extraction
	I1206 08:53:37.161139   54452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:53:37.189850   54452 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:53:37.189861   54452 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:53:37.189866   54452 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:53:37.189968   54452 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:53:37.190042   54452 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:53:37.215125   54452 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 08:53:37.215146   54452 cni.go:84] Creating CNI manager for ""
	I1206 08:53:37.215156   54452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:53:37.215169   54452 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:53:37.215191   54452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:53:37.215303   54452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 08:53:37.215394   54452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:53:37.223611   54452 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:53:37.223674   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:53:37.231742   54452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:53:37.245618   54452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:53:37.258873   54452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
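At this point the rendered kubeadm config (2087 bytes, matching the YAML printed above) sits on the node as /var/tmp/minikube/kubeadm.yaml.new. On a kubeadm recent enough to ship the subcommand (an assumption about the node's tooling, not something this run executed), the file could be sanity-checked offline with:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

minikube itself instead decides what to do with the file via the diff shown further down.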
	I1206 08:53:37.272656   54452 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:53:37.277122   54452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:53:37.404546   54452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:53:38.220934   54452 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:53:38.220945   54452 certs.go:195] generating shared ca certs ...
	I1206 08:53:38.220959   54452 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:53:38.221099   54452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:53:38.221148   54452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:53:38.221154   54452 certs.go:257] generating profile certs ...
	I1206 08:53:38.221235   54452 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:53:38.221287   54452 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:53:38.221325   54452 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:53:38.221433   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:53:38.221466   54452 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:53:38.221473   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:53:38.221504   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:53:38.221527   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:53:38.221551   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:53:38.221601   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:53:38.222193   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:53:38.247995   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:53:38.268014   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:53:38.289184   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:53:38.308825   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:53:38.326629   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:53:38.344198   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:53:38.361819   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:53:38.379442   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:53:38.397025   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:53:38.414583   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:53:38.432182   54452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:53:38.444938   54452 ssh_runner.go:195] Run: openssl version
	I1206 08:53:38.451220   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.458796   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:53:38.466335   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.470195   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.470251   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.511660   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:53:38.520107   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.527562   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:53:38.535252   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.539202   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.539257   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.580913   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:53:38.589267   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.596722   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:53:38.604956   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.609011   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.609077   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.654662   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 08:53:38.662094   54452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:53:38.666110   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 08:53:38.707066   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 08:53:38.748028   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 08:53:38.790291   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 08:53:38.831326   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 08:53:38.872506   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 08:53:38.913738   54452 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:38.913828   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:53:38.913894   54452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:53:38.941817   54452 cri.go:89] found id: ""
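
	The empty `found id: ""` means the crictl listing returned no kube-system containers to account for. A sketch of driving that listing from Go (local exec.Command in place of minikube's ssh_runner; the crictl flags are verbatim from the log line above):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers (any state)
// whose pod namespace label is kube-system, as in the crictl call above.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; an empty output means none.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d kube-system containers found\n", len(ids))
}
```
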
	I1206 08:53:38.941888   54452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:53:38.949650   54452 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 08:53:38.949660   54452 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 08:53:38.949712   54452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 08:53:38.957046   54452 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:38.957552   54452 kubeconfig.go:125] found "functional-090986" server: "https://192.168.49.2:8441"
	I1206 08:53:38.960001   54452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 08:53:38.973807   54452 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 08:39:02.953222088 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 08:53:37.265532344 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
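
	The drift decision at kubeadm.go:645 hinges on the exit status of `diff -u`: the rendered kubeadm.yaml.new differs from the file on disk (here only the admission-plugins value changed), so minikube reconfigures rather than reusing the old config. A sketch of that decision, assuming a plain local diff instead of the ssh_runner:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and interprets the exit status:
// 0 = identical, 1 = files differ (drift), anything else = a real error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: no drift
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // exit 1: drift; out holds the unified diff
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}
```
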
	I1206 08:53:38.973835   54452 kubeadm.go:1161] stopping kube-system containers ...
	I1206 08:53:38.973855   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 08:53:38.973990   54452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:53:39.006630   54452 cri.go:89] found id: ""
	I1206 08:53:39.006691   54452 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 08:53:39.027188   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:53:39.035115   54452 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  6 08:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 08:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 08:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  6 08:43 /etc/kubernetes/scheduler.conf
	
	I1206 08:53:39.035195   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:53:39.043346   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:53:39.051128   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.051184   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:53:39.058808   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:53:39.066431   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.066486   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:53:39.074261   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:53:39.082004   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.082060   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
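
	Each of the three grep calls above exits 1 because the expected control-plane endpoint is absent from the kubeconfig, so the file is deleted and left for `kubeadm init phase kubeconfig` to regenerate (admin.conf passed the same check and is kept). A condensed sketch of that loop, assuming local file access in place of grep over SSH:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8441"

// removeStaleKubeconfigs deletes any of the given kubeconfigs that do not
// reference the expected control-plane endpoint, as in the log above.
func removeStaleKubeconfigs(paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return err
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%q not in %s - removing\n", endpoint, p)
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	err := removeStaleKubeconfigs([]string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
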
	I1206 08:53:39.089693   54452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:53:39.097973   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:39.144114   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.034967   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.247090   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.303335   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
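
	The restart then re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the fresh kubeadm.yaml rather than a full `kubeadm init`. A sketch of that sequence, assuming local execution and calling the versioned binary directly instead of the `env PATH=...` wrapper shown in the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const binDir = "/var/lib/minikube/binaries/v1.35.0-beta.0"
	const cfg = "/var/tmp/minikube/kubeadm.yaml"

	// Same phase order as the log: certs, kubeconfig, kubelet-start,
	// control-plane, etcd. Each phase tolerates pre-existing state.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("sudo", append([]string{binDir + "/kubeadm"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
```
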
	I1206 08:53:40.358218   54452 api_server.go:52] waiting for apiserver process to appear ...
	I1206 08:53:40.358284   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:40.858753   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:41.358700   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:41.858760   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:42.359143   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:42.859214   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:43.358859   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:43.858475   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:44.358512   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:44.859201   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:45.358789   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:45.858829   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:46.358595   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:46.858465   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:47.358809   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:47.858516   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:48.358367   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:48.859203   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:49.359207   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:49.858491   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:50.359361   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:50.859136   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:51.358696   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:51.858427   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:52.358504   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:52.858356   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:53.359243   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:53.859142   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:54.359242   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:54.859316   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:55.359059   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:55.858609   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:56.359350   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:56.859078   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:57.359214   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:57.859097   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:58.359174   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:58.858946   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:59.358533   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:59.859078   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:00.358576   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:00.859407   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:01.358874   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:01.858512   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:02.358441   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:02.858517   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:03.359363   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:03.859400   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:04.359276   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:04.859156   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:05.358974   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:05.858357   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:06.359182   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:06.859168   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:07.359160   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:07.859209   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:08.359310   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:08.859102   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:09.358600   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:09.859219   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:10.359034   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:10.858816   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:11.358429   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:11.858433   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:12.359162   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:12.859196   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:13.358899   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:13.858468   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:14.359028   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:14.858481   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:15.359221   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:15.858792   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:16.358493   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:16.859448   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:17.359360   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:17.859153   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:18.358389   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:18.859216   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:19.359289   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:19.858488   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:20.359257   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:20.859245   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:21.359184   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:21.859040   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:22.358496   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:22.859325   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:23.358553   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:23.858649   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:24.358999   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:24.858487   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:25.359321   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:25.859061   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:26.358793   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:26.858844   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:27.358536   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:27.859274   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:28.359019   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:28.858738   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:29.359019   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:29.858548   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:30.358369   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:30.859081   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:31.359088   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:31.858895   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:32.359444   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:32.859328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:33.359199   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:33.858413   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:34.358493   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:34.858487   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:35.359338   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:35.858497   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:36.358475   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:36.858480   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:37.359209   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:37.858485   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:38.359088   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:38.858716   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:39.358992   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:39.859022   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
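
	api_server.go:52 then polls `pgrep -xnf kube-apiserver.*minikube.*` every 500ms; in the run above no matching process ever appears, so after about a minute of polling minikube gives up and falls back to gathering logs. A sketch of such a poll loop, with the 500ms cadence taken from the log's timestamps (the one-minute timeout is an assumption inferred from the pacing, not a confirmed minikube constant):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears
// or the context deadline expires, mirroring the 500ms cadence above.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// -x: match the whole command line, -n: newest match, -f: full args.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			return nil // process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err)
	}
}
```
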
	I1206 08:54:40.358688   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:40.358791   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:40.388106   54452 cri.go:89] found id: ""
	I1206 08:54:40.388120   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.388134   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:40.388140   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:40.388201   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:40.412432   54452 cri.go:89] found id: ""
	I1206 08:54:40.412446   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.412453   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:40.412458   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:40.412515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:40.436247   54452 cri.go:89] found id: ""
	I1206 08:54:40.436261   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.436268   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:40.436274   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:40.436334   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:40.461648   54452 cri.go:89] found id: ""
	I1206 08:54:40.461662   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.461669   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:40.461674   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:40.461731   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:40.490826   54452 cri.go:89] found id: ""
	I1206 08:54:40.490840   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.490846   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:40.490851   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:40.490912   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:40.517246   54452 cri.go:89] found id: ""
	I1206 08:54:40.517259   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.517266   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:40.517272   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:40.517331   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:40.542129   54452 cri.go:89] found id: ""
	I1206 08:54:40.542144   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.542150   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:40.542157   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:40.542167   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:40.599816   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:40.599836   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:40.610692   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:40.610709   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:40.681214   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:40.671721   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673072   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673914   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.675628   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.676278   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:40.671721   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673072   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673914   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.675628   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.676278   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:40.681229   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:40.681240   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:40.746611   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:40.746631   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
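
	Once the wait fails, logs.go collects the kubelet and containerd journals, dmesg, `kubectl describe nodes` (which itself fails here because nothing listens on 8441), and container status; the last of these uses a shell fallback so it works whether crictl or only docker is installed. A sketch of that fallback as a Go helper (the command string is verbatim from the log line above):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus runs the same fallback pipeline as the log: prefer
// crictl if it is on PATH, otherwise fall back to `docker ps -a`.
func containerStatus() (string, error) {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(out)
}
```

	The same poll-then-gather cycle repeats below every few seconds until the restart deadline, with each `kubectl describe nodes` attempt refused on localhost:8441 because the apiserver container never comes up.
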
	I1206 08:54:43.275588   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:43.286822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:43.286894   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:43.313760   54452 cri.go:89] found id: ""
	I1206 08:54:43.313779   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.313786   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:43.313793   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:43.313852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:43.338174   54452 cri.go:89] found id: ""
	I1206 08:54:43.338188   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.338203   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:43.338208   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:43.338278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:43.362249   54452 cri.go:89] found id: ""
	I1206 08:54:43.362263   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.362270   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:43.362275   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:43.362333   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:43.386332   54452 cri.go:89] found id: ""
	I1206 08:54:43.386345   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.386353   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:43.386358   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:43.386413   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:43.413265   54452 cri.go:89] found id: ""
	I1206 08:54:43.413278   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.413285   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:43.413290   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:43.413346   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:43.437411   54452 cri.go:89] found id: ""
	I1206 08:54:43.437424   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.437431   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:43.437436   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:43.437497   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:43.463006   54452 cri.go:89] found id: ""
	I1206 08:54:43.463019   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.463046   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:43.463054   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:43.463065   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:43.531909   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:43.523361   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.524077   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525611   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525984   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.527554   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:43.523361   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.524077   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525611   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525984   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.527554   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:43.531920   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:43.531930   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:43.596428   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:43.596447   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:43.625653   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:43.625669   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:43.685656   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:43.685675   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:46.197048   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:46.207403   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:46.207468   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:46.259332   54452 cri.go:89] found id: ""
	I1206 08:54:46.259345   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.259361   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:46.259367   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:46.259453   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:46.293591   54452 cri.go:89] found id: ""
	I1206 08:54:46.293604   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.293611   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:46.293616   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:46.293674   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:46.324320   54452 cri.go:89] found id: ""
	I1206 08:54:46.324333   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.324340   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:46.324345   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:46.324403   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:46.349505   54452 cri.go:89] found id: ""
	I1206 08:54:46.349519   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.349526   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:46.349531   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:46.349592   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:46.372944   54452 cri.go:89] found id: ""
	I1206 08:54:46.372958   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.372965   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:46.372970   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:46.373028   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:46.397863   54452 cri.go:89] found id: ""
	I1206 08:54:46.397876   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.397884   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:46.397889   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:46.397947   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:46.423405   54452 cri.go:89] found id: ""
	I1206 08:54:46.423419   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.423426   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:46.423434   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:46.423444   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:46.479557   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:46.479577   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:46.490975   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:46.490992   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:46.555476   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:46.546289   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.547116   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.548919   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.549655   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.551369   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:46.546289   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.547116   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.548919   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.549655   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.551369   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:46.555486   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:46.555499   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:46.617650   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:46.617666   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:49.145146   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:49.156935   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:49.157011   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:49.181313   54452 cri.go:89] found id: ""
	I1206 08:54:49.181327   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.181334   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:49.181339   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:49.181396   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:49.205770   54452 cri.go:89] found id: ""
	I1206 08:54:49.205783   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.205792   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:49.205797   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:49.205854   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:49.246208   54452 cri.go:89] found id: ""
	I1206 08:54:49.246232   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.246240   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:49.246245   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:49.246312   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:49.276707   54452 cri.go:89] found id: ""
	I1206 08:54:49.276720   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.276739   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:49.276744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:49.276817   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:49.304665   54452 cri.go:89] found id: ""
	I1206 08:54:49.304684   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.304691   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:49.304696   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:49.304754   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:49.329874   54452 cri.go:89] found id: ""
	I1206 08:54:49.329888   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.329895   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:49.329901   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:49.329967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:49.355459   54452 cri.go:89] found id: ""
	I1206 08:54:49.355473   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.355480   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:49.355487   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:49.355503   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:49.383334   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:49.383349   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:49.438134   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:49.438151   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:49.449298   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:49.449313   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:49.517360   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:49.507622   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.508394   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510126   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510650   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.512155   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:49.507622   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.508394   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510126   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510650   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.512155   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:49.517370   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:49.517380   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:52.080828   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:52.091103   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:52.091181   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:52.116535   54452 cri.go:89] found id: ""
	I1206 08:54:52.116549   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.116556   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:52.116570   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:52.116633   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:52.142398   54452 cri.go:89] found id: ""
	I1206 08:54:52.142412   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.142424   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:52.142429   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:52.142485   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:52.169937   54452 cri.go:89] found id: ""
	I1206 08:54:52.169951   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.169958   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:52.169963   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:52.170020   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:52.200818   54452 cri.go:89] found id: ""
	I1206 08:54:52.200832   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.200838   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:52.200843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:52.200899   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:52.228819   54452 cri.go:89] found id: ""
	I1206 08:54:52.228833   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.228841   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:52.228846   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:52.228908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:52.258951   54452 cri.go:89] found id: ""
	I1206 08:54:52.258964   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.258972   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:52.258977   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:52.259042   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:52.294986   54452 cri.go:89] found id: ""
	I1206 08:54:52.295000   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.295007   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:52.295015   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:52.295025   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:52.362225   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:52.362245   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:52.389713   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:52.389729   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:52.445119   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:52.445137   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:52.458958   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:52.458980   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:52.523486   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:52.514851   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.515698   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517261   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517893   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.519458   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:52.514851   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.515698   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517261   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517893   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.519458   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:55.023766   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:55.034751   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:55.034820   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:55.060938   54452 cri.go:89] found id: ""
	I1206 08:54:55.060952   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.060960   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:55.060965   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:55.061025   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:55.086352   54452 cri.go:89] found id: ""
	I1206 08:54:55.086365   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.086383   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:55.086389   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:55.086457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:55.111318   54452 cri.go:89] found id: ""
	I1206 08:54:55.111334   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.111341   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:55.111346   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:55.111427   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:55.140103   54452 cri.go:89] found id: ""
	I1206 08:54:55.140118   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.140125   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:55.140130   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:55.140194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:55.164478   54452 cri.go:89] found id: ""
	I1206 08:54:55.164492   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.164500   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:55.164505   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:55.164565   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:55.191182   54452 cri.go:89] found id: ""
	I1206 08:54:55.191195   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.191203   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:55.191209   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:55.191266   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:55.216083   54452 cri.go:89] found id: ""
	I1206 08:54:55.216097   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.216104   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:55.216111   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:55.216122   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:55.303982   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:55.294944   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.295756   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.297492   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.298117   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.299945   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:55.294944   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.295756   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.297492   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.298117   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.299945   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
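
Every retry in this run fails the same way because nothing is listening on the apiserver port. As a rough illustration only (not part of the test harness; the address localhost:8441 is taken from the log above), a plain TCP dial reproduces the exact error string kubectl surfaces in the memcache.go lines:

// Sketch: probe the apiserver address recorded in the log. With no apiserver
// listening, DialTimeout returns the same "connect: connection refused"
// that kubectl reports.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println(err) // e.g. dial tcp [::1]:8441: connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
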
	I1206 08:54:55.303992   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:55.304003   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:55.365857   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:55.365875   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:55.393911   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:55.393928   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:55.455110   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:55.455129   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
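
The block above is one complete iteration of minikube's apiserver wait loop: check for a running kube-apiserver process, list CRI containers for each control-plane component, then gather kubelet, dmesg, describe-nodes, containerd, and container-status logs. The following is a self-contained sketch of that polling pattern, mirroring only the commands recorded verbatim in the log; it is illustrative, not minikube's actual implementation:

// Sketch of the ~3-second polling cycle visible in the log. Success is a
// matching kube-apiserver process; otherwise each control-plane component
// is queried via crictl, exactly as the ssh_runner lines show.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for {
		// "sudo pgrep -xnf kube-apiserver.*minikube.*" exits 0 once the
		// apiserver process exists; that is the loop's success condition.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// Otherwise list CRI containers per component, as the log does with
		// "sudo crictl ps -a --quiet --name=<component>".
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %s: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("found %d containers matching %q\n", len(ids), name)
		}
		time.Sleep(3 * time.Second)
	}
}

In this run the loop never hits its success condition: every crictl query returns an empty ID list, so minikube falls through to log gathering on each pass.
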
	I1206 08:54:57.967188   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:57.977408   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:57.977467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:58.003574   54452 cri.go:89] found id: ""
	I1206 08:54:58.003588   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.003596   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:58.003601   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:58.003662   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:58.029323   54452 cri.go:89] found id: ""
	I1206 08:54:58.029337   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.029344   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:58.029348   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:58.029408   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:58.054996   54452 cri.go:89] found id: ""
	I1206 08:54:58.055010   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.055018   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:58.055023   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:58.055087   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:58.079698   54452 cri.go:89] found id: ""
	I1206 08:54:58.079711   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.079718   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:58.079723   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:58.079785   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:58.106383   54452 cri.go:89] found id: ""
	I1206 08:54:58.106396   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.106403   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:58.106408   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:58.106467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:58.135301   54452 cri.go:89] found id: ""
	I1206 08:54:58.135315   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.135325   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:58.135330   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:58.135431   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:58.165240   54452 cri.go:89] found id: ""
	I1206 08:54:58.165255   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.165262   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:58.165269   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:58.165279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:58.176468   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:58.176483   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:58.263783   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:58.246836   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.247297   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.255628   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.256461   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.259475   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:58.246836   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.247297   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.255628   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.256461   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.259475   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:58.263793   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:58.263806   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:58.336059   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:58.336078   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:58.364550   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:58.364565   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:00.926395   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:00.936607   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:00.936669   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:00.961767   54452 cri.go:89] found id: ""
	I1206 08:55:00.961781   54452 logs.go:282] 0 containers: []
	W1206 08:55:00.961788   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:00.961793   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:00.961855   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:00.987655   54452 cri.go:89] found id: ""
	I1206 08:55:00.987671   54452 logs.go:282] 0 containers: []
	W1206 08:55:00.987678   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:00.987684   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:00.987753   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:01.017321   54452 cri.go:89] found id: ""
	I1206 08:55:01.017335   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.017342   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:01.017347   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:01.017405   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:01.043120   54452 cri.go:89] found id: ""
	I1206 08:55:01.043134   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.043140   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:01.043146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:01.043208   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:01.069934   54452 cri.go:89] found id: ""
	I1206 08:55:01.069951   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.069958   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:01.069967   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:01.070037   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:01.095743   54452 cri.go:89] found id: ""
	I1206 08:55:01.095757   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.095765   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:01.095772   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:01.095832   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:01.120915   54452 cri.go:89] found id: ""
	I1206 08:55:01.120933   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.120940   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:01.120948   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:01.120958   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:01.179366   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:01.179392   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:01.191802   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:01.191818   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:01.292667   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:01.282943   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284116   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284837   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.286639   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.287228   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:01.282943   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284116   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284837   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.286639   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.287228   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:01.292676   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:01.292687   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:01.357710   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:01.357729   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:03.889702   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:03.900135   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:03.900194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:03.926097   54452 cri.go:89] found id: ""
	I1206 08:55:03.926122   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.926129   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:03.926135   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:03.926204   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:03.950796   54452 cri.go:89] found id: ""
	I1206 08:55:03.950810   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.950818   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:03.950823   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:03.950881   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:03.976998   54452 cri.go:89] found id: ""
	I1206 08:55:03.977012   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.977018   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:03.977024   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:03.977083   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:04.004847   54452 cri.go:89] found id: ""
	I1206 08:55:04.004862   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.004870   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:04.004876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:04.004943   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:04.030715   54452 cri.go:89] found id: ""
	I1206 08:55:04.030729   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.030737   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:04.030742   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:04.030806   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:04.056324   54452 cri.go:89] found id: ""
	I1206 08:55:04.056338   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.056345   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:04.056351   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:04.056412   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:04.082124   54452 cri.go:89] found id: ""
	I1206 08:55:04.082137   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.082145   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:04.082152   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:04.082163   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:04.138719   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:04.138737   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:04.150252   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:04.150269   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:04.220848   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:04.209917   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.210563   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212138   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212692   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.214187   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:04.209917   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.210563   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212138   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212692   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.214187   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:04.220858   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:04.220868   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:04.293646   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:04.293665   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:06.823180   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:06.833518   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:06.833576   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:06.863092   54452 cri.go:89] found id: ""
	I1206 08:55:06.863106   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.863113   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:06.863119   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:06.863177   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:06.888504   54452 cri.go:89] found id: ""
	I1206 08:55:06.888519   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.888525   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:06.888530   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:06.888595   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:06.918175   54452 cri.go:89] found id: ""
	I1206 08:55:06.918189   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.918197   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:06.918202   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:06.918261   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:06.944460   54452 cri.go:89] found id: ""
	I1206 08:55:06.944473   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.944480   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:06.944485   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:06.944551   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:06.973765   54452 cri.go:89] found id: ""
	I1206 08:55:06.973778   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.973786   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:06.973791   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:06.973852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:06.999311   54452 cri.go:89] found id: ""
	I1206 08:55:06.999324   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.999331   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:06.999337   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:06.999415   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:07.027677   54452 cri.go:89] found id: ""
	I1206 08:55:07.027690   54452 logs.go:282] 0 containers: []
	W1206 08:55:07.027697   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:07.027705   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:07.027715   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:07.086320   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:07.086338   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:07.097607   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:07.097623   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:07.161897   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:07.153185   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.154007   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.155730   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.156339   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.158053   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:07.153185   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.154007   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.155730   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.156339   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.158053   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:07.161907   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:07.161919   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:07.224772   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:07.224792   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:09.768328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:09.778939   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:09.779000   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:09.805473   54452 cri.go:89] found id: ""
	I1206 08:55:09.805487   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.805494   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:09.805499   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:09.805557   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:09.830605   54452 cri.go:89] found id: ""
	I1206 08:55:09.830618   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.830625   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:09.830630   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:09.830689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:09.855855   54452 cri.go:89] found id: ""
	I1206 08:55:09.855869   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.855876   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:09.855881   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:09.855937   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:09.880900   54452 cri.go:89] found id: ""
	I1206 08:55:09.880913   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.880920   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:09.880925   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:09.880981   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:09.906796   54452 cri.go:89] found id: ""
	I1206 08:55:09.906810   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.906817   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:09.906822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:09.906882   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:09.932980   54452 cri.go:89] found id: ""
	I1206 08:55:09.932996   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.933004   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:09.933009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:09.933081   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:09.961870   54452 cri.go:89] found id: ""
	I1206 08:55:09.961884   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.961892   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:09.961900   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:09.961922   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:10.018106   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:10.018129   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:10.031414   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:10.031441   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:10.103678   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:10.092903   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.093952   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.095756   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.096440   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.098120   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:10.092903   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.093952   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.095756   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.096440   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.098120   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:10.103689   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:10.103700   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:10.167044   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:10.167063   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:12.697325   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:12.707894   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:12.707958   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:12.732888   54452 cri.go:89] found id: ""
	I1206 08:55:12.732902   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.732914   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:12.732919   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:12.732975   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:12.756939   54452 cri.go:89] found id: ""
	I1206 08:55:12.756953   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.756960   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:12.756965   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:12.757026   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:12.785954   54452 cri.go:89] found id: ""
	I1206 08:55:12.785967   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.785974   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:12.785979   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:12.786037   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:12.810560   54452 cri.go:89] found id: ""
	I1206 08:55:12.810574   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.810581   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:12.810586   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:12.810643   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:12.835829   54452 cri.go:89] found id: ""
	I1206 08:55:12.835844   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.835851   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:12.835856   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:12.835917   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:12.860638   54452 cri.go:89] found id: ""
	I1206 08:55:12.860653   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.860660   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:12.860665   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:12.860723   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:12.885721   54452 cri.go:89] found id: ""
	I1206 08:55:12.885734   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.885742   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:12.885750   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:12.885760   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:12.944772   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:12.944793   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:12.956560   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:12.956577   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:13.023566   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:13.013901   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.014692   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.016414   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.017110   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.019101   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:13.013901   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.014692   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.016414   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.017110   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.019101   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:13.023586   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:13.023596   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:13.086592   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:13.086612   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:15.617835   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:15.628437   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:15.628524   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:15.657209   54452 cri.go:89] found id: ""
	I1206 08:55:15.657223   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.657230   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:15.657235   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:15.657297   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:15.681664   54452 cri.go:89] found id: ""
	I1206 08:55:15.681678   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.681685   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:15.681690   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:15.681748   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:15.707568   54452 cri.go:89] found id: ""
	I1206 08:55:15.707581   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.707588   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:15.707594   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:15.707654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:15.733456   54452 cri.go:89] found id: ""
	I1206 08:55:15.733470   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.733493   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:15.733499   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:15.733558   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:15.758882   54452 cri.go:89] found id: ""
	I1206 08:55:15.758896   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.758903   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:15.758908   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:15.758967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:15.784184   54452 cri.go:89] found id: ""
	I1206 08:55:15.784198   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.784205   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:15.784210   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:15.784269   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:15.809166   54452 cri.go:89] found id: ""
	I1206 08:55:15.809178   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.809186   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:15.809194   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:15.809204   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:15.865479   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:15.865498   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:15.876370   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:15.876386   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:15.949255   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:15.940278   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.941080   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.942741   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.943482   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.945251   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:15.940278   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.941080   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.942741   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.943482   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.945251   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:15.949277   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:15.949289   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:16.012838   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:16.012858   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:18.547536   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:18.557857   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:18.557924   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:18.583107   54452 cri.go:89] found id: ""
	I1206 08:55:18.583120   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.583128   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:18.583132   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:18.583192   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:18.608251   54452 cri.go:89] found id: ""
	I1206 08:55:18.608264   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.608271   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:18.608276   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:18.608333   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:18.634059   54452 cri.go:89] found id: ""
	I1206 08:55:18.634073   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.634080   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:18.634085   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:18.634158   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:18.659252   54452 cri.go:89] found id: ""
	I1206 08:55:18.659266   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.659273   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:18.659278   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:18.659338   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:18.687529   54452 cri.go:89] found id: ""
	I1206 08:55:18.687542   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.687549   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:18.687554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:18.687611   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:18.716705   54452 cri.go:89] found id: ""
	I1206 08:55:18.716719   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.716726   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:18.716731   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:18.716790   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:18.743861   54452 cri.go:89] found id: ""
	I1206 08:55:18.743875   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.743882   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:18.743890   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:18.743900   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:18.800501   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:18.800520   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:18.811514   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:18.811531   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:18.877593   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:18.868814   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.869581   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871247   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871996   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.873734   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:18.868814   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.869581   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871247   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871996   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.873734   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:18.877603   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:18.877614   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:18.945147   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:18.945175   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
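
The cycle above now repeats on a roughly three-second cadence for the remainder of this start attempt: minikube first probes for a running kube-apiserver process with pgrep, then asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal Go sketch of that poll-until-healthy pattern, built from the two probes visible in the log; isAPIServerUp is a hypothetical helper for illustration, not minikube's actual wait code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // isAPIServerUp mirrors the two probes in the log above: a pgrep for
    // the kube-apiserver process, then a crictl listing of its container.
    func isAPIServerUp() bool {
    	// pgrep exits non-zero when no process matches.
    	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
    		return false
    	}
    	// crictl --quiet prints only container IDs; empty output means none found.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
    	return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		if isAPIServerUp() {
    			fmt.Println("apiserver is up")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s cadence between retries in the log
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }

In the run above the probes never succeed, so the loop keeps cycling until the surrounding test times out.
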
	I1206 08:55:21.473372   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:21.484974   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:21.485036   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:21.516585   54452 cri.go:89] found id: ""
	I1206 08:55:21.516598   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.516606   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:21.516611   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:21.516670   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:21.543917   54452 cri.go:89] found id: ""
	I1206 08:55:21.543930   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.543937   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:21.543943   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:21.544006   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:21.581932   54452 cri.go:89] found id: ""
	I1206 08:55:21.581946   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.581953   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:21.581958   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:21.582017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:21.606796   54452 cri.go:89] found id: ""
	I1206 08:55:21.606810   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.606817   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:21.606822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:21.606885   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:21.632673   54452 cri.go:89] found id: ""
	I1206 08:55:21.632686   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.632693   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:21.632698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:21.632791   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:21.656595   54452 cri.go:89] found id: ""
	I1206 08:55:21.656609   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.656616   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:21.656621   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:21.656681   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:21.681710   54452 cri.go:89] found id: ""
	I1206 08:55:21.681723   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.681730   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:21.681738   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:21.681747   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:21.737731   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:21.737750   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:21.748929   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:21.748944   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:21.814714   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:21.804423   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.805260   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807123   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807866   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.809673   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:21.804423   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.805260   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807123   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807866   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.809673   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:21.814725   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:21.814737   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:21.878842   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:21.878860   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:24.408240   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:24.418359   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:24.418420   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:24.445088   54452 cri.go:89] found id: ""
	I1206 08:55:24.445102   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.445109   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:24.445115   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:24.445218   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:24.481785   54452 cri.go:89] found id: ""
	I1206 08:55:24.481799   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.481807   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:24.481812   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:24.481871   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:24.514861   54452 cri.go:89] found id: ""
	I1206 08:55:24.514875   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.514882   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:24.514888   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:24.514951   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:24.545514   54452 cri.go:89] found id: ""
	I1206 08:55:24.545528   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.545535   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:24.545540   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:24.545604   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:24.571688   54452 cri.go:89] found id: ""
	I1206 08:55:24.571703   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.571710   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:24.571715   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:24.571780   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:24.596172   54452 cri.go:89] found id: ""
	I1206 08:55:24.596192   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.596200   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:24.596205   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:24.596267   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:24.621684   54452 cri.go:89] found id: ""
	I1206 08:55:24.621698   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.621706   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:24.621713   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:24.621728   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:24.683261   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:24.683279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:24.717098   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:24.717115   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:24.774777   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:24.774797   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:24.786405   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:24.786422   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:24.852542   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:24.844316   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.844764   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846310   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846629   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.848126   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:24.844316   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.844764   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846310   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846629   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.848126   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
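
Every describe-nodes attempt in this section dies at the transport layer: "dial tcp [::1]:8441: connect: connection refused" means nothing is listening on the apiserver port inside the node, so kubectl never gets as far as TLS or authentication, and the repeated memcache.go lines are client-go retrying API discovery before giving up. The failure mode can be reproduced in isolation with a plain TCP dial against the same address (a sketch; localhost:8441 is taken verbatim from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Probe the raw TCP endpoint that kubectl is dialing in the log above.
    // "connection refused" here means no process is bound to port 8441.
    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on 8441")
    }
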
	I1206 08:55:27.352798   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:27.363390   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:27.363453   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:27.390863   54452 cri.go:89] found id: ""
	I1206 08:55:27.390877   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.390884   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:27.390891   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:27.390950   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:27.419763   54452 cri.go:89] found id: ""
	I1206 08:55:27.419777   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.419784   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:27.419789   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:27.419843   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:27.443855   54452 cri.go:89] found id: ""
	I1206 08:55:27.443868   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.443875   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:27.443880   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:27.443937   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:27.472073   54452 cri.go:89] found id: ""
	I1206 08:55:27.472086   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.472093   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:27.472099   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:27.472157   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:27.505330   54452 cri.go:89] found id: ""
	I1206 08:55:27.505344   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.505352   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:27.505357   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:27.505414   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:27.533936   54452 cri.go:89] found id: ""
	I1206 08:55:27.533950   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.533957   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:27.533962   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:27.534017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:27.562283   54452 cri.go:89] found id: ""
	I1206 08:55:27.562296   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.562303   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:27.562311   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:27.562320   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:27.619092   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:27.619110   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:27.630324   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:27.630339   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:27.695241   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:27.686898   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.687546   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689114   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689707   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.691358   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:27.686898   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.687546   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689114   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689707   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.691358   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:27.695251   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:27.695266   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:27.757877   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:27.757895   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:30.286157   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:30.296567   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:30.296625   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:30.321390   54452 cri.go:89] found id: ""
	I1206 08:55:30.321405   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.321413   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:30.321418   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:30.321480   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:30.350054   54452 cri.go:89] found id: ""
	I1206 08:55:30.350068   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.350075   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:30.350083   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:30.350149   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:30.375330   54452 cri.go:89] found id: ""
	I1206 08:55:30.375350   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.375358   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:30.375363   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:30.375445   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:30.406133   54452 cri.go:89] found id: ""
	I1206 08:55:30.406146   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.406153   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:30.406158   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:30.406217   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:30.434180   54452 cri.go:89] found id: ""
	I1206 08:55:30.434195   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.434202   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:30.434207   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:30.434272   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:30.461023   54452 cri.go:89] found id: ""
	I1206 08:55:30.461037   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.461044   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:30.461049   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:30.461107   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:30.493229   54452 cri.go:89] found id: ""
	I1206 08:55:30.493243   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.493250   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:30.493268   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:30.493279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:30.556454   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:30.556473   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:30.567243   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:30.567258   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:30.630618   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:30.622515   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.623325   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.624965   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.625291   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.626789   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:30.622515   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.623325   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.624965   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.625291   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.626789   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:30.630628   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:30.630638   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:30.692365   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:30.692384   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:33.222243   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:33.233203   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:33.233264   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:33.259086   54452 cri.go:89] found id: ""
	I1206 08:55:33.259099   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.259107   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:33.259113   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:33.259175   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:33.285885   54452 cri.go:89] found id: ""
	I1206 08:55:33.285912   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.285920   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:33.285926   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:33.286002   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:33.313522   54452 cri.go:89] found id: ""
	I1206 08:55:33.313536   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.313543   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:33.313554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:33.313614   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:33.343303   54452 cri.go:89] found id: ""
	I1206 08:55:33.343318   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.343335   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:33.343341   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:33.343434   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:33.372461   54452 cri.go:89] found id: ""
	I1206 08:55:33.372475   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.372482   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:33.372488   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:33.372556   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:33.398660   54452 cri.go:89] found id: ""
	I1206 08:55:33.398674   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.398682   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:33.398695   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:33.398770   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:33.425653   54452 cri.go:89] found id: ""
	I1206 08:55:33.425667   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.425675   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:33.425683   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:33.425693   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:33.436575   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:33.436591   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:33.519919   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:33.509857   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.511357   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.512054   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.513835   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.514450   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:33.509857   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.511357   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.512054   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.513835   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.514450   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:33.519928   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:33.519939   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:33.584991   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:33.585010   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:33.617158   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:33.617175   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:36.180867   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:36.191295   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:36.191369   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:36.215504   54452 cri.go:89] found id: ""
	I1206 08:55:36.215518   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.215525   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:36.215530   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:36.215586   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:36.241860   54452 cri.go:89] found id: ""
	I1206 08:55:36.241874   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.241881   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:36.241886   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:36.241948   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:36.270206   54452 cri.go:89] found id: ""
	I1206 08:55:36.270220   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.270227   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:36.270232   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:36.270292   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:36.297638   54452 cri.go:89] found id: ""
	I1206 08:55:36.297651   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.297658   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:36.297663   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:36.297721   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:36.327655   54452 cri.go:89] found id: ""
	I1206 08:55:36.327681   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.327689   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:36.327694   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:36.327764   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:36.353797   54452 cri.go:89] found id: ""
	I1206 08:55:36.353811   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.353818   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:36.353825   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:36.353884   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:36.378781   54452 cri.go:89] found id: ""
	I1206 08:55:36.378795   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.378802   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:36.378810   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:36.378823   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:36.435517   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:36.435537   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:36.446663   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:36.446679   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:36.538183   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:36.527758   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.528583   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.530703   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.531276   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.534098   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:36.527758   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.528583   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.530703   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.531276   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.534098   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:36.538193   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:36.538203   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:36.601364   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:36.601383   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:39.129686   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:39.140306   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:39.140375   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:39.169862   54452 cri.go:89] found id: ""
	I1206 08:55:39.169876   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.169883   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:39.169889   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:39.169952   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:39.195755   54452 cri.go:89] found id: ""
	I1206 08:55:39.195771   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.195778   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:39.195784   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:39.195842   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:39.220719   54452 cri.go:89] found id: ""
	I1206 08:55:39.220732   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.220739   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:39.220744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:39.220801   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:39.249535   54452 cri.go:89] found id: ""
	I1206 08:55:39.249549   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.249556   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:39.249561   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:39.249620   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:39.281267   54452 cri.go:89] found id: ""
	I1206 08:55:39.281281   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.281288   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:39.281293   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:39.281379   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:39.306847   54452 cri.go:89] found id: ""
	I1206 08:55:39.306860   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.306867   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:39.306873   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:39.306933   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:39.334023   54452 cri.go:89] found id: ""
	I1206 08:55:39.334036   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.334057   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:39.334064   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:39.334073   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:39.363589   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:39.363604   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:39.420152   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:39.420169   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:39.430815   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:39.430830   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:39.513246   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:39.503975   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.504808   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.506588   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.507212   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.508933   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:39.503975   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.504808   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.506588   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.507212   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.508933   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:39.513256   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:39.513266   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
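
One incidental detail worth noting: the set of five log sources is fixed, but the order they are gathered in shuffles between iterations (kubelet-first in most rounds, containerd-first at 08:55:24, dmesg-first at 08:55:33, container-status-first at 08:55:39), which is consistent with ranging over a Go map, where iteration order is randomized. The collection step itself is a list of shell commands run inside the node; a standalone sketch that replays the same five commands, with plain local exec standing in for minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // The five log sources gathered on every failed iteration above.
    // Commands are copied verbatim from the log.
    func main() {
    	sources := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Printf("==> %s (err=%v)\n%s\n", s.name, err, out)
    	}
    }
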
	I1206 08:55:42.085786   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:42.098317   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:42.098387   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:42.134671   54452 cri.go:89] found id: ""
	I1206 08:55:42.134686   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.134695   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:42.134705   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:42.134775   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:42.167474   54452 cri.go:89] found id: ""
	I1206 08:55:42.167489   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.167498   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:42.167505   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:42.167575   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:42.202078   54452 cri.go:89] found id: ""
	I1206 08:55:42.202093   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.202100   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:42.202106   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:42.202171   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:42.228525   54452 cri.go:89] found id: ""
	I1206 08:55:42.228539   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.228546   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:42.228552   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:42.228621   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:42.257322   54452 cri.go:89] found id: ""
	I1206 08:55:42.257337   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.257344   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:42.257350   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:42.257457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:42.284221   54452 cri.go:89] found id: ""
	I1206 08:55:42.284235   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.284253   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:42.284259   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:42.284329   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:42.311654   54452 cri.go:89] found id: ""
	I1206 08:55:42.311668   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.311675   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:42.311683   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:42.311694   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:42.368273   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:42.368294   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:42.379477   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:42.379493   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:42.443515   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:42.434726   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.435504   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437161   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437774   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.439236   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
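Every describe-nodes attempt in this section fails with the same symptom: kubectl cannot open a TCP connection to the apiserver endpoint on port 8441, which is consistent with the empty kube-apiserver probes above (no container means no listener). A quick check from inside the node (a sketch, assuming the iproute2 ss utility is present in the minikube image):

    # List TCP listeners on the apiserver port; empty output means
    # nothing is bound to 8441, matching the "connection refused" errors.
    sudo ss -ltn 'sport = :8441'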
	I1206 08:55:42.443526   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:42.443543   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:42.512858   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:42.512878   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
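The log-gathering fallback runs a fixed set of shell commands over SSH. For reference, the same collection can be reproduced by hand (commands copied from the Run lines above; journalctl, dmesg, and crictl/docker are all standard tools on the minikube node):

    # Last 400 lines of the kubelet and containerd unit journals:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # Kernel messages at warning level and above, without pager or color:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # All containers, preferring crictl and falling back to docker:
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a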
	I1206 08:55:45.043040   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
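Each retry cycle starts with a process-level check before any CRI queries: with pgrep, -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest PID, so the pattern demands a kube-apiserver process whose arguments mention minikube. A one-line sketch of the same check:

    # Prints the newest matching PID, or falls through when the
    # apiserver process is not running (the case throughout this log).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running"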
	I1206 08:55:45.068009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:45.068076   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:45.109801   54452 cri.go:89] found id: ""
	I1206 08:55:45.109815   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.109823   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:45.109829   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:45.109896   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:45.149825   54452 cri.go:89] found id: ""
	I1206 08:55:45.149841   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.149849   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:45.149855   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:45.149929   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:45.187416   54452 cri.go:89] found id: ""
	I1206 08:55:45.187433   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.187441   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:45.187446   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:45.187520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:45.235891   54452 cri.go:89] found id: ""
	I1206 08:55:45.235908   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.235916   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:45.235922   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:45.236066   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:45.279650   54452 cri.go:89] found id: ""
	I1206 08:55:45.279665   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.279673   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:45.279681   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:45.279750   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:45.325794   54452 cri.go:89] found id: ""
	I1206 08:55:45.325844   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.325871   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:45.325893   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:45.325962   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:45.357237   54452 cri.go:89] found id: ""
	I1206 08:55:45.357251   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.357258   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:45.357266   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:45.357291   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:45.385704   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:45.385720   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:45.442819   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:45.442837   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:45.454504   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:45.454523   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:45.547110   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:45.538939   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.539311   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.540633   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.541395   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.542989   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:55:45.547119   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:45.547133   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:48.116344   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:48.126956   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:48.127022   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:48.152657   54452 cri.go:89] found id: ""
	I1206 08:55:48.152671   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.152678   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:48.152684   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:48.152743   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:48.182395   54452 cri.go:89] found id: ""
	I1206 08:55:48.182409   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.182417   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:48.182422   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:48.182494   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:48.211297   54452 cri.go:89] found id: ""
	I1206 08:55:48.211310   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.211327   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:48.211333   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:48.211402   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:48.236544   54452 cri.go:89] found id: ""
	I1206 08:55:48.236558   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.236565   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:48.236571   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:48.236627   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:48.262553   54452 cri.go:89] found id: ""
	I1206 08:55:48.262570   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.262582   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:48.262587   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:48.262680   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:48.295466   54452 cri.go:89] found id: ""
	I1206 08:55:48.295488   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.295495   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:48.295506   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:48.295586   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:48.321818   54452 cri.go:89] found id: ""
	I1206 08:55:48.321830   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.321837   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:48.321845   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:48.321856   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:48.378211   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:48.378229   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:48.389232   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:48.389255   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:48.456700   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:48.448592   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.449583   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.450577   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.451171   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.452831   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:55:48.456711   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:48.456720   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:48.523317   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:48.523335   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:51.052796   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:51.063850   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:51.063912   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:51.089613   54452 cri.go:89] found id: ""
	I1206 08:55:51.089628   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.089635   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:51.089643   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:51.089727   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:51.116588   54452 cri.go:89] found id: ""
	I1206 08:55:51.116601   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.116609   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:51.116614   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:51.116679   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:51.146172   54452 cri.go:89] found id: ""
	I1206 08:55:51.146186   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.146193   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:51.146199   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:51.146266   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:51.172046   54452 cri.go:89] found id: ""
	I1206 08:55:51.172071   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.172078   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:51.172084   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:51.172163   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:51.200464   54452 cri.go:89] found id: ""
	I1206 08:55:51.200477   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.200495   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:51.200501   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:51.200561   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:51.229170   54452 cri.go:89] found id: ""
	I1206 08:55:51.229184   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.229191   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:51.229196   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:51.229254   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:51.254375   54452 cri.go:89] found id: ""
	I1206 08:55:51.254389   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.254396   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:51.254403   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:51.254413   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:51.317370   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:51.317390   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:51.344624   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:51.344642   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:51.402739   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:51.402759   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:51.413613   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:51.413629   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:51.483207   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:51.470850   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.471424   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.472991   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.473416   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.475113   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:55:53.983859   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:53.997260   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:53.997326   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:54.024774   54452 cri.go:89] found id: ""
	I1206 08:55:54.024788   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.024795   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:54.024801   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:54.024866   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:54.050802   54452 cri.go:89] found id: ""
	I1206 08:55:54.050830   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.050837   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:54.050842   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:54.050911   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:54.079419   54452 cri.go:89] found id: ""
	I1206 08:55:54.079433   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.079440   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:54.079446   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:54.079517   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:54.104851   54452 cri.go:89] found id: ""
	I1206 08:55:54.104864   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.104871   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:54.104876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:54.104933   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:54.133815   54452 cri.go:89] found id: ""
	I1206 08:55:54.133829   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.133847   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:54.133853   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:54.133909   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:54.163047   54452 cri.go:89] found id: ""
	I1206 08:55:54.163071   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.163078   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:54.163083   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:54.163150   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:54.190227   54452 cri.go:89] found id: ""
	I1206 08:55:54.190242   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.190249   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:54.190263   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:54.190273   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:54.246189   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:54.246208   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:54.257068   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:54.257083   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:54.322094   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:54.313214   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.313895   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.315763   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.316388   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.318125   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:55:54.322104   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:54.322114   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:54.385131   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:54.385150   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:56.917265   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:56.927438   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:56.927499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:56.951596   54452 cri.go:89] found id: ""
	I1206 08:55:56.951611   54452 logs.go:282] 0 containers: []
	W1206 08:55:56.951618   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:56.951623   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:56.951685   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:56.975635   54452 cri.go:89] found id: ""
	I1206 08:55:56.975649   54452 logs.go:282] 0 containers: []
	W1206 08:55:56.975656   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:56.975661   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:56.975718   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:57.005275   54452 cri.go:89] found id: ""
	I1206 08:55:57.005289   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.005296   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:57.005302   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:57.005370   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:57.031301   54452 cri.go:89] found id: ""
	I1206 08:55:57.031315   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.031333   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:57.031339   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:57.031422   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:57.057133   54452 cri.go:89] found id: ""
	I1206 08:55:57.057146   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.057153   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:57.057159   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:57.057221   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:57.081358   54452 cri.go:89] found id: ""
	I1206 08:55:57.081371   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.081378   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:57.081384   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:57.081442   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:57.116018   54452 cri.go:89] found id: ""
	I1206 08:55:57.116033   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.116049   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:57.116057   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:57.116067   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:57.171598   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:57.171615   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:57.182153   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:57.182169   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:57.245457   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:57.237416   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.237828   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.239402   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.240057   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.241674   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:55:57.245466   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:57.245476   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:57.307969   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:57.307987   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:59.836840   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:59.846983   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:59.847044   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:59.871818   54452 cri.go:89] found id: ""
	I1206 08:55:59.871831   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.871838   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:59.871844   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:59.871904   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:59.896695   54452 cri.go:89] found id: ""
	I1206 08:55:59.896709   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.896716   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:59.896721   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:59.896787   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:59.921887   54452 cri.go:89] found id: ""
	I1206 08:55:59.921911   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.921918   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:59.921924   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:59.921998   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:59.948824   54452 cri.go:89] found id: ""
	I1206 08:55:59.948837   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.948845   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:59.948850   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:59.948908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:59.974553   54452 cri.go:89] found id: ""
	I1206 08:55:59.974567   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.974575   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:59.974580   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:59.974638   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:00.057731   54452 cri.go:89] found id: ""
	I1206 08:56:00.057783   54452 logs.go:282] 0 containers: []
	W1206 08:56:00.057791   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:00.057798   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:00.058035   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:00.191639   54452 cri.go:89] found id: ""
	I1206 08:56:00.191655   54452 logs.go:282] 0 containers: []
	W1206 08:56:00.191663   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:00.191671   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:00.191685   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:00.488607   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:00.462504   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.463297   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477164   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477991   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.479845   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:00.488619   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:00.488632   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:00.602413   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:00.602434   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:00.637181   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:00.637200   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:00.701850   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:00.701868   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:03.215126   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:03.225397   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:03.225464   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:03.253115   54452 cri.go:89] found id: ""
	I1206 08:56:03.253128   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.253135   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:03.253143   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:03.253203   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:03.278704   54452 cri.go:89] found id: ""
	I1206 08:56:03.278717   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.278724   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:03.278730   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:03.278788   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:03.304400   54452 cri.go:89] found id: ""
	I1206 08:56:03.304414   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.304421   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:03.304427   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:03.304484   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:03.330915   54452 cri.go:89] found id: ""
	I1206 08:56:03.330927   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.330934   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:03.330939   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:03.331000   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:03.356123   54452 cri.go:89] found id: ""
	I1206 08:56:03.356136   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.356143   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:03.356149   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:03.356205   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:03.381497   54452 cri.go:89] found id: ""
	I1206 08:56:03.381511   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.381517   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:03.381523   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:03.381582   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:03.405821   54452 cri.go:89] found id: ""
	I1206 08:56:03.405834   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.405841   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:03.405849   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:03.405859   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:03.462897   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:03.462918   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:03.474378   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:03.474393   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:03.559522   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:03.549699   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.550344   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.552761   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.554016   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.555406   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:03.559532   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:03.559545   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:03.626698   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:03.626716   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:06.154123   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:06.164837   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:06.164908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:06.191102   54452 cri.go:89] found id: ""
	I1206 08:56:06.191115   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.191123   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:06.191128   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:06.191194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:06.215815   54452 cri.go:89] found id: ""
	I1206 08:56:06.215829   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.215836   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:06.215841   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:06.215901   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:06.241431   54452 cri.go:89] found id: ""
	I1206 08:56:06.241445   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.241452   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:06.241457   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:06.241520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:06.266677   54452 cri.go:89] found id: ""
	I1206 08:56:06.266692   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.266699   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:06.266705   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:06.266768   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:06.290924   54452 cri.go:89] found id: ""
	I1206 08:56:06.290940   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.290948   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:06.290953   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:06.291015   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:06.315767   54452 cri.go:89] found id: ""
	I1206 08:56:06.315781   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.315788   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:06.315794   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:06.315852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:06.341271   54452 cri.go:89] found id: ""
	I1206 08:56:06.341284   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.341291   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:06.341298   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:06.341309   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:06.369777   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:06.369793   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:06.426976   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:06.426995   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:06.438111   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:06.438126   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:06.515349   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:06.504075   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.504993   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.506819   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.507499   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.510593   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:06.515366   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:06.515403   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:09.084957   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:09.095918   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:09.095982   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:09.122788   54452 cri.go:89] found id: ""
	I1206 08:56:09.122802   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.122816   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:09.122822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:09.122886   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:09.150280   54452 cri.go:89] found id: ""
	I1206 08:56:09.150296   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.150303   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:09.150308   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:09.150370   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:09.175968   54452 cri.go:89] found id: ""
	I1206 08:56:09.175982   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.175989   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:09.175995   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:09.176054   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:09.205200   54452 cri.go:89] found id: ""
	I1206 08:56:09.205214   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.205221   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:09.205226   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:09.205284   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:09.229722   54452 cri.go:89] found id: ""
	I1206 08:56:09.229741   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.229758   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:09.229764   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:09.229823   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:09.253449   54452 cri.go:89] found id: ""
	I1206 08:56:09.253462   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.253469   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:09.253475   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:09.253532   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:09.278075   54452 cri.go:89] found id: ""
	I1206 08:56:09.278096   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.278103   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:09.278111   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:09.278127   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:09.334207   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:09.334224   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:09.345268   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:09.345284   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:09.411030   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:09.402872   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.403332   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.404994   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.405444   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.406900   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:09.411046   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:09.411057   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:09.477250   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:09.477268   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:12.012172   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:12.023603   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:12.023666   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:12.049522   54452 cri.go:89] found id: ""
	I1206 08:56:12.049536   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.049544   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:12.049549   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:12.049616   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:12.079322   54452 cri.go:89] found id: ""
	I1206 08:56:12.079336   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.079343   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:12.079348   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:12.079434   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:12.104615   54452 cri.go:89] found id: ""
	I1206 08:56:12.104629   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.104636   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:12.104642   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:12.104698   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:12.129522   54452 cri.go:89] found id: ""
	I1206 08:56:12.129536   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.129542   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:12.129548   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:12.129603   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:12.154617   54452 cri.go:89] found id: ""
	I1206 08:56:12.154631   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.154637   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:12.154642   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:12.154701   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:12.180772   54452 cri.go:89] found id: ""
	I1206 08:56:12.180786   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.180793   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:12.180798   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:12.180860   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:12.204559   54452 cri.go:89] found id: ""
	I1206 08:56:12.204573   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.204585   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:12.204593   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:12.204605   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:12.267761   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:12.267780   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:12.295680   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:12.295696   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:12.355740   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:12.355759   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:12.367574   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:12.367589   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:12.438034   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:12.429169   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.429845   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.431592   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.432279   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.433870   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:14.938326   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:14.948550   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:14.948610   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:14.974812   54452 cri.go:89] found id: ""
	I1206 08:56:14.974825   54452 logs.go:282] 0 containers: []
	W1206 08:56:14.974832   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:14.974843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:14.974901   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:15.033969   54452 cri.go:89] found id: ""
	I1206 08:56:15.033985   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.034002   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:15.034009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:15.034081   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:15.061932   54452 cri.go:89] found id: ""
	I1206 08:56:15.061946   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.061954   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:15.061959   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:15.062054   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:15.092717   54452 cri.go:89] found id: ""
	I1206 08:56:15.092731   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.092738   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:15.092744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:15.092804   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:15.119219   54452 cri.go:89] found id: ""
	I1206 08:56:15.119234   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.119242   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:15.119247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:15.119309   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:15.149464   54452 cri.go:89] found id: ""
	I1206 08:56:15.149477   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.149485   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:15.149490   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:15.149550   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:15.175614   54452 cri.go:89] found id: ""
	I1206 08:56:15.175628   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.175635   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:15.175643   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:15.175653   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:15.239770   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:15.239789   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:15.267874   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:15.267891   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:15.327229   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:15.327247   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:15.338540   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:15.338557   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:15.402152   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:15.393377   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.393759   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395003   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395462   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.397184   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:17.903812   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:17.914165   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:17.914229   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:17.942343   54452 cri.go:89] found id: ""
	I1206 08:56:17.942357   54452 logs.go:282] 0 containers: []
	W1206 08:56:17.942363   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:17.942369   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:17.942427   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:17.972379   54452 cri.go:89] found id: ""
	I1206 08:56:17.972394   54452 logs.go:282] 0 containers: []
	W1206 08:56:17.972401   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:17.972406   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:17.972474   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:18.000726   54452 cri.go:89] found id: ""
	I1206 08:56:18.000740   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.000762   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:18.000768   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:18.000832   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:18.027348   54452 cri.go:89] found id: ""
	I1206 08:56:18.027406   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.027418   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:18.027431   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:18.027515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:18.055911   54452 cri.go:89] found id: ""
	I1206 08:56:18.055925   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.055933   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:18.055937   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:18.055994   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:18.085367   54452 cri.go:89] found id: ""
	I1206 08:56:18.085381   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.085392   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:18.085398   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:18.085466   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:18.110486   54452 cri.go:89] found id: ""
	I1206 08:56:18.110505   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.110513   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:18.110520   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:18.110531   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:18.174849   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:18.166389   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.166788   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168371   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168921   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.170369   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:18.174859   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:18.174870   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:18.237754   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:18.237774   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:18.268012   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:18.268033   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:18.324652   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:18.324671   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:20.837649   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:20.848772   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:20.848844   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:20.875162   54452 cri.go:89] found id: ""
	I1206 08:56:20.875177   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.875184   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:20.875190   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:20.875260   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:20.900599   54452 cri.go:89] found id: ""
	I1206 08:56:20.900613   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.900620   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:20.900625   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:20.900683   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:20.928195   54452 cri.go:89] found id: ""
	I1206 08:56:20.928209   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.928216   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:20.928221   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:20.928288   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:20.952510   54452 cri.go:89] found id: ""
	I1206 08:56:20.952524   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.952532   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:20.952537   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:20.952594   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:20.976651   54452 cri.go:89] found id: ""
	I1206 08:56:20.976665   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.976672   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:20.976677   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:20.976747   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:21.003279   54452 cri.go:89] found id: ""
	I1206 08:56:21.003294   54452 logs.go:282] 0 containers: []
	W1206 08:56:21.003301   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:21.003306   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:21.003372   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:21.029382   54452 cri.go:89] found id: ""
	I1206 08:56:21.029396   54452 logs.go:282] 0 containers: []
	W1206 08:56:21.029403   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:21.029411   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:21.029421   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:21.091035   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:21.082849   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.083705   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085252   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085569   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.087050   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:21.091049   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:21.091059   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:21.153084   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:21.153102   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:21.179992   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:21.180009   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:21.242302   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:21.242323   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:23.753350   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:23.764153   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:23.764212   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:23.794093   54452 cri.go:89] found id: ""
	I1206 08:56:23.794108   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.794115   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:23.794121   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:23.794192   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:23.818597   54452 cri.go:89] found id: ""
	I1206 08:56:23.818611   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.818618   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:23.818623   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:23.818681   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:23.845861   54452 cri.go:89] found id: ""
	I1206 08:56:23.845875   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.845882   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:23.845887   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:23.845951   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:23.871357   54452 cri.go:89] found id: ""
	I1206 08:56:23.871371   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.871423   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:23.871428   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:23.871486   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:23.895904   54452 cri.go:89] found id: ""
	I1206 08:56:23.895918   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.895926   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:23.895931   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:23.895998   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:23.921905   54452 cri.go:89] found id: ""
	I1206 08:56:23.921918   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.921925   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:23.921931   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:23.921988   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:23.946488   54452 cri.go:89] found id: ""
	I1206 08:56:23.946512   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.946520   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:23.946529   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:23.946539   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:24.002888   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:24.002907   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:24.015146   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:24.015170   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:24.085686   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:24.074786   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.075755   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.078321   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.079336   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.080390   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:24.085697   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:24.085707   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:24.149216   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:24.149233   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:26.686769   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:26.697125   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:26.697183   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:26.728496   54452 cri.go:89] found id: ""
	I1206 08:56:26.728510   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.728527   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:26.728532   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:26.728597   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:26.755101   54452 cri.go:89] found id: ""
	I1206 08:56:26.755115   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.755130   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:26.755136   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:26.755195   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:26.785198   54452 cri.go:89] found id: ""
	I1206 08:56:26.785211   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.785229   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:26.785234   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:26.785298   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:26.816431   54452 cri.go:89] found id: ""
	I1206 08:56:26.816445   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.816452   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:26.816457   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:26.816515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:26.841875   54452 cri.go:89] found id: ""
	I1206 08:56:26.841889   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.841897   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:26.841902   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:26.841964   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:26.868358   54452 cri.go:89] found id: ""
	I1206 08:56:26.868372   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.868379   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:26.868384   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:26.868456   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:26.895528   54452 cri.go:89] found id: ""
	I1206 08:56:26.895541   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.895547   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:26.895555   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:26.895564   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:26.961952   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:26.961970   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:27.006459   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:27.006475   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:27.063666   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:27.063685   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:27.074993   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:27.075011   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:27.138852   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:27.130623   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.131223   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.132971   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.133326   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.134833   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:29.639504   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:29.649774   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:29.649848   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:29.679629   54452 cri.go:89] found id: ""
	I1206 08:56:29.679642   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.679650   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:29.679655   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:29.679716   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:29.704535   54452 cri.go:89] found id: ""
	I1206 08:56:29.704550   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.704557   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:29.704563   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:29.704635   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:29.737627   54452 cri.go:89] found id: ""
	I1206 08:56:29.737640   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.737647   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:29.737652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:29.737709   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:29.767083   54452 cri.go:89] found id: ""
	I1206 08:56:29.767097   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.767104   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:29.767109   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:29.767166   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:29.793665   54452 cri.go:89] found id: ""
	I1206 08:56:29.793685   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.793693   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:29.793698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:29.793761   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:29.822695   54452 cri.go:89] found id: ""
	I1206 08:56:29.822709   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.822717   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:29.822722   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:29.822781   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:29.848347   54452 cri.go:89] found id: ""
	I1206 08:56:29.848360   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.848380   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:29.848389   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:29.848399   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:29.911329   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:29.911349   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:29.939981   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:29.939996   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:30.001274   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:30.001296   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:30.022683   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:30.022703   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:30.138182   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:30.128285   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.129603   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.130253   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132024   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132540   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
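	[editor's note] The `describe nodes` step fails because nothing is listening on the apiserver port (8441 in this run, per --apiserver-port): each of kubectl's discovery retries gets ECONNREFUSED on [::1]:8441. A quick Go probe, hypothetical and not part of the test suite, reproduces the same reachability check:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // Same endpoint kubectl is dialing in the log above.
	        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	        if err != nil {
	            // With no apiserver up, this prints a "connection refused" error.
	            fmt.Println("apiserver not reachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("apiserver port is open")
	    }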
	I1206 08:56:32.638423   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:32.648554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:32.648613   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:32.672719   54452 cri.go:89] found id: ""
	I1206 08:56:32.672733   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.672741   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:32.672745   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:32.672808   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:32.697375   54452 cri.go:89] found id: ""
	I1206 08:56:32.697389   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.697396   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:32.697401   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:32.697456   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:32.730608   54452 cri.go:89] found id: ""
	I1206 08:56:32.730621   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.730628   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:32.730633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:32.730690   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:32.756886   54452 cri.go:89] found id: ""
	I1206 08:56:32.756900   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.756906   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:32.756911   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:32.756967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:32.786416   54452 cri.go:89] found id: ""
	I1206 08:56:32.786429   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.786436   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:32.786441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:32.786499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:32.817852   54452 cri.go:89] found id: ""
	I1206 08:56:32.817866   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.817873   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:32.817878   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:32.817948   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:32.847789   54452 cri.go:89] found id: ""
	I1206 08:56:32.847803   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.847810   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:32.847817   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:32.847826   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:32.913422   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:32.904590   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.905149   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907029   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907428   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.909140   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:32.913432   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:32.913443   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:32.979128   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:32.979147   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:33.009021   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:33.009038   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:33.066116   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:33.066134   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
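	[editor's note] Each gathering pass collects the same four sources: containerd and kubelet units via journalctl (last 400 lines each), kernel messages via dmesg filtered to warning level and above, and container status via crictl with a docker fallback. A minimal Go sketch that shells out the same commands, assuming the binaries exist on the host; gather is a hypothetical helper:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // gather runs one log-collection command through bash, matching the
	    // /bin/bash -c invocations in the log above.
	    func gather(label, cmd string) {
	        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	        fmt.Printf("== %s (err=%v) ==\n%s\n", label, err, out)
	    }

	    func main() {
	        gather("containerd", "sudo journalctl -u containerd -n 400")
	        gather("kubelet", "sudo journalctl -u kubelet -n 400")
	        // --level warn,err,crit,alert,emerg keeps only warning-or-worse kernel messages.
	        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	        // Prefer crictl; fall back to docker if crictl is absent.
	        gather("containers", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	    }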
	I1206 08:56:35.577653   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:35.587677   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:35.587739   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:35.612385   54452 cri.go:89] found id: ""
	I1206 08:56:35.612398   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.612405   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:35.612416   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:35.612474   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:35.639348   54452 cri.go:89] found id: ""
	I1206 08:56:35.639362   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.639369   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:35.639395   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:35.639457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:35.662406   54452 cri.go:89] found id: ""
	I1206 08:56:35.662420   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.662427   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:35.662432   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:35.662494   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:35.686450   54452 cri.go:89] found id: ""
	I1206 08:56:35.686464   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.686471   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:35.686476   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:35.686535   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:35.715902   54452 cri.go:89] found id: ""
	I1206 08:56:35.715915   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.715922   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:35.715927   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:35.715986   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:35.753483   54452 cri.go:89] found id: ""
	I1206 08:56:35.753496   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.753503   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:35.753509   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:35.753571   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:35.787475   54452 cri.go:89] found id: ""
	I1206 08:56:35.787488   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.787495   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:35.787509   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:35.787520   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:35.799521   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:35.799536   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:35.865541   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:35.856956   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.857477   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859150   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859621   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.861412   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:35.865551   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:35.865562   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:35.928394   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:35.928412   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:35.960163   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:35.960178   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
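	[editor's note] Each cycle above opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: a full-command-line (-f), exact-pattern (-x), newest-match (-n) search for a kube-apiserver process. pgrep exits non-zero when nothing matches, so the check keeps failing and the loop keeps polling. A hedged Go equivalent, with apiserverRunning as a hypothetical helper rather than minikube code:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // apiserverRunning reports whether a kube-apiserver process exists,
	    // using the same pgrep probe seen at the top of each cycle above.
	    func apiserverRunning() bool {
	        // pgrep exits 0 when at least one process matches, non-zero otherwise.
	        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	        return err == nil
	    }

	    func main() {
	        fmt.Println("kube-apiserver running:", apiserverRunning())
	    }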
	I1206 08:56:38.518969   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:38.529441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:38.529503   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:38.556742   54452 cri.go:89] found id: ""
	I1206 08:56:38.556756   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.556764   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:38.556769   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:38.556828   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:38.585575   54452 cri.go:89] found id: ""
	I1206 08:56:38.585589   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.585596   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:38.585602   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:38.585675   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:38.610698   54452 cri.go:89] found id: ""
	I1206 08:56:38.610713   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.610721   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:38.610726   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:38.610799   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:38.635789   54452 cri.go:89] found id: ""
	I1206 08:56:38.635802   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.635809   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:38.635814   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:38.635875   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:38.664415   54452 cri.go:89] found id: ""
	I1206 08:56:38.664429   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.664436   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:38.664441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:38.664499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:38.692373   54452 cri.go:89] found id: ""
	I1206 08:56:38.692387   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.692394   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:38.692400   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:38.692463   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:38.717762   54452 cri.go:89] found id: ""
	I1206 08:56:38.717776   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.717784   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:38.717791   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:38.717804   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:38.761801   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:38.761816   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:38.823195   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:38.823214   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:38.834338   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:38.834354   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:38.902350   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:38.894283   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895054   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895859   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897382   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897703   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:38.902361   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:38.902372   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:41.468409   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:41.478754   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:41.478820   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:41.506969   54452 cri.go:89] found id: ""
	I1206 08:56:41.506982   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.506989   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:41.506997   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:41.507057   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:41.531981   54452 cri.go:89] found id: ""
	I1206 08:56:41.531995   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.532002   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:41.532007   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:41.532067   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:41.556489   54452 cri.go:89] found id: ""
	I1206 08:56:41.556503   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.556511   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:41.556516   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:41.556578   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:41.582188   54452 cri.go:89] found id: ""
	I1206 08:56:41.582202   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.582209   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:41.582224   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:41.582297   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:41.608043   54452 cri.go:89] found id: ""
	I1206 08:56:41.608065   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.608073   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:41.608078   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:41.608149   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:41.636701   54452 cri.go:89] found id: ""
	I1206 08:56:41.636714   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.636722   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:41.636728   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:41.636786   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:41.661109   54452 cri.go:89] found id: ""
	I1206 08:56:41.661123   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.661131   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:41.661138   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:41.661147   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:41.718276   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:41.718293   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:41.731689   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:41.731704   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:41.813161   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:41.804518   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.805060   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.806862   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.807552   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.809239   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:41.813171   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:41.813183   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:41.879169   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:41.879189   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:44.409328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:44.419475   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:44.419534   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:44.444626   54452 cri.go:89] found id: ""
	I1206 08:56:44.444640   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.444647   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:44.444652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:44.444709   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:44.469065   54452 cri.go:89] found id: ""
	I1206 08:56:44.469078   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.469085   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:44.469090   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:44.469154   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:44.492979   54452 cri.go:89] found id: ""
	I1206 08:56:44.492993   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.493000   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:44.493006   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:44.493065   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:44.517980   54452 cri.go:89] found id: ""
	I1206 08:56:44.517994   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.518012   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:44.518018   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:44.518084   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:44.550302   54452 cri.go:89] found id: ""
	I1206 08:56:44.550315   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.550322   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:44.550338   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:44.550411   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:44.574741   54452 cri.go:89] found id: ""
	I1206 08:56:44.574754   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.574773   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:44.574779   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:44.574844   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:44.599427   54452 cri.go:89] found id: ""
	I1206 08:56:44.599440   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.599447   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:44.599454   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:44.599464   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:44.655195   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:44.655213   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:44.666596   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:44.666611   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:44.743689   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:44.734701   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.735711   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737288   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737597   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.739087   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:44.743706   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:44.743716   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:44.813114   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:44.813132   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:47.340486   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:47.350443   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:47.350502   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:47.381645   54452 cri.go:89] found id: ""
	I1206 08:56:47.381659   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.381666   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:47.381671   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:47.381732   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:47.408660   54452 cri.go:89] found id: ""
	I1206 08:56:47.408674   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.408681   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:47.408686   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:47.408751   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:47.434188   54452 cri.go:89] found id: ""
	I1206 08:56:47.434201   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.434208   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:47.434213   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:47.434272   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:47.463313   54452 cri.go:89] found id: ""
	I1206 08:56:47.463334   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.463342   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:47.463347   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:47.463437   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:47.491850   54452 cri.go:89] found id: ""
	I1206 08:56:47.491864   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.491871   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:47.491876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:47.491942   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:47.520200   54452 cri.go:89] found id: ""
	I1206 08:56:47.520214   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.520221   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:47.520226   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:47.520289   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:47.546930   54452 cri.go:89] found id: ""
	I1206 08:56:47.546943   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.546950   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:47.546958   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:47.546969   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:47.607002   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:47.607020   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:47.617961   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:47.617976   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:47.681928   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:47.673776   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.674574   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676165   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676631   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.678134   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:47.681938   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:47.681949   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:47.749465   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:47.749483   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:50.280242   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:50.291127   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:50.291189   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:50.316285   54452 cri.go:89] found id: ""
	I1206 08:56:50.316299   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.316307   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:50.316312   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:50.316378   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:50.342947   54452 cri.go:89] found id: ""
	I1206 08:56:50.342961   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.342968   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:50.342973   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:50.343034   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:50.368308   54452 cri.go:89] found id: ""
	I1206 08:56:50.368322   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.368329   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:50.368334   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:50.368392   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:50.392557   54452 cri.go:89] found id: ""
	I1206 08:56:50.392571   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.392578   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:50.392583   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:50.392643   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:50.417455   54452 cri.go:89] found id: ""
	I1206 08:56:50.417469   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.417477   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:50.417482   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:50.417547   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:50.442791   54452 cri.go:89] found id: ""
	I1206 08:56:50.442805   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.442813   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:50.442818   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:50.442887   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:50.473290   54452 cri.go:89] found id: ""
	I1206 08:56:50.473304   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.473310   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:50.473318   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:50.473329   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:50.484225   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:50.484242   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:50.551034   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:50.542777   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.543204   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.544973   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.545586   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.547123   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:50.551048   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:50.551059   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:50.614007   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:50.614025   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:50.642494   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:50.642510   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:53.201231   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:53.211652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:53.211712   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:53.237084   54452 cri.go:89] found id: ""
	I1206 08:56:53.237098   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.237106   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:53.237117   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:53.237179   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:53.265518   54452 cri.go:89] found id: ""
	I1206 08:56:53.265533   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.265541   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:53.265547   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:53.265619   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:53.291219   54452 cri.go:89] found id: ""
	I1206 08:56:53.291233   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.291242   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:53.291247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:53.291304   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:53.316119   54452 cri.go:89] found id: ""
	I1206 08:56:53.316135   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.316143   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:53.316148   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:53.316208   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:53.345553   54452 cri.go:89] found id: ""
	I1206 08:56:53.345566   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.345574   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:53.345579   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:53.345637   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:53.374116   54452 cri.go:89] found id: ""
	I1206 08:56:53.374130   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.374138   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:53.374144   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:53.374201   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:53.401450   54452 cri.go:89] found id: ""
	I1206 08:56:53.401463   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.401470   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:53.401488   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:53.401498   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:53.464628   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:53.464645   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:53.492208   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:53.492225   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:53.548199   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:53.548216   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:53.559872   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:53.559887   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:53.624790   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:53.616289   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.617036   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.618638   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.619245   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.620839   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
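	[editor's note] The cycle timestamps (08:56:29, :32, :35, :38, :41, :44, :47, :50, :53, :56) show the health check re-running roughly every three seconds until the apiserver appears or the wait deadline expires, which is why the same block repeats. A minimal poll-with-timeout sketch of that pattern, written by hand under those assumptions rather than taken from minikube's source; check is a stand-in for the real probe:

	    package main

	    import (
	        "context"
	        "errors"
	        "fmt"
	        "time"
	    )

	    // pollUntil re-runs check on each tick until it succeeds or ctx
	    // expires, mirroring the ~3s retry cadence in the timestamps above.
	    func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
	        ticker := time.NewTicker(interval)
	        defer ticker.Stop()
	        for {
	            if err := check(); err == nil {
	                return nil
	            }
	            select {
	            case <-ctx.Done():
	                return fmt.Errorf("gave up waiting: %w", ctx.Err())
	            case <-ticker.C:
	            }
	        }
	    }

	    func main() {
	        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	        defer cancel()
	        err := pollUntil(ctx, 3*time.Second, func() error {
	            return errors.New("apiserver still down") // stand-in for the real probe
	        })
	        fmt.Println(err)
	    }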
	 output: 
	** stderr ** 
	E1206 08:56:53.616289   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.617036   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.618638   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.619245   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.620839   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
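
Every pass in this stretch fails the same way: pgrep finds no kube-apiserver process, crictl finds no control-plane containers, and kubectl's probe of https://localhost:8441 is refused because nothing is listening there. To confirm the apiserver is genuinely absent rather than merely unready, the two checks can be repeated by hand; this is a minimal sketch reusing the exact commands from the log (assuming shell access to the node, e.g. via minikube ssh, and quoting the pgrep pattern so the shell cannot glob-expand it):

    # No output from either command means no apiserver process or container exists.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver
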
	I1206 08:56:56.126662   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:56.136918   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:56.136978   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:56.165346   54452 cri.go:89] found id: ""
	I1206 08:56:56.165359   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.165376   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:56.165382   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:56.165447   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:56.194525   54452 cri.go:89] found id: ""
	I1206 08:56:56.194538   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.194545   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:56.194562   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:56.194621   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:56.220295   54452 cri.go:89] found id: ""
	I1206 08:56:56.220309   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.220316   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:56.220321   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:56.220377   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:56.244567   54452 cri.go:89] found id: ""
	I1206 08:56:56.244580   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.244587   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:56.244592   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:56.244648   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:56.267992   54452 cri.go:89] found id: ""
	I1206 08:56:56.268005   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.268012   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:56.268018   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:56.268076   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:56.295817   54452 cri.go:89] found id: ""
	I1206 08:56:56.295830   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.295837   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:56.295843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:56.295904   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:56.319421   54452 cri.go:89] found id: ""
	I1206 08:56:56.319435   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.319442   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:56.319450   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:56.319460   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:56.350423   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:56.350439   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:56.407158   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:56.407176   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:56.417732   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:56.417747   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:56.488632   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:56.480052   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.480705   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.482573   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.483242   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.484311   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:56.480052   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.480705   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.482573   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.483242   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.484311   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:56.488642   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:56.488652   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
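
Each diagnostic pass gathers the same five sources: the kubelet and containerd journals, filtered kernel messages, container status, and kubectl describe nodes. The equivalent manual collection, with every command copied verbatim from the passes above (run inside the node):

    # Same diagnostics minikube gathers on each pass.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
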
	I1206 08:56:59.061980   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:59.072278   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:59.072339   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:59.101215   54452 cri.go:89] found id: ""
	I1206 08:56:59.101228   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.101235   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:59.101241   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:59.101302   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:59.127327   54452 cri.go:89] found id: ""
	I1206 08:56:59.127342   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.127349   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:59.127355   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:59.127442   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:59.152367   54452 cri.go:89] found id: ""
	I1206 08:56:59.152381   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.152388   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:59.152393   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:59.152461   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:59.176595   54452 cri.go:89] found id: ""
	I1206 08:56:59.176609   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.176616   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:59.176622   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:59.176680   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:59.201640   54452 cri.go:89] found id: ""
	I1206 08:56:59.201654   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.201661   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:59.201667   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:59.201725   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:59.228000   54452 cri.go:89] found id: ""
	I1206 08:56:59.228015   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.228023   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:59.228028   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:59.228097   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:59.254668   54452 cri.go:89] found id: ""
	I1206 08:56:59.254681   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.254688   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:59.254696   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:59.254707   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:59.284894   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:59.284910   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:59.342586   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:59.342604   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:59.354343   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:59.354368   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:59.422837   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:59.414293   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.414916   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416482   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416892   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.418605   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:59.414293   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.414916   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416482   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416892   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.418605   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:59.422847   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:59.422857   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:01.987724   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:02.004462   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:02.004525   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:02.037544   54452 cri.go:89] found id: ""
	I1206 08:57:02.037558   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.037565   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:02.037571   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:02.037629   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:02.064737   54452 cri.go:89] found id: ""
	I1206 08:57:02.064750   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.064759   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:02.064765   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:02.064822   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:02.090594   54452 cri.go:89] found id: ""
	I1206 08:57:02.090607   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.090615   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:02.090620   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:02.090677   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:02.118059   54452 cri.go:89] found id: ""
	I1206 08:57:02.118073   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.118080   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:02.118086   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:02.118142   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:02.147171   54452 cri.go:89] found id: ""
	I1206 08:57:02.147184   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.147191   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:02.147197   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:02.147258   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:02.178322   54452 cri.go:89] found id: ""
	I1206 08:57:02.178336   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.178343   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:02.178349   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:02.178409   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:02.206125   54452 cri.go:89] found id: ""
	I1206 08:57:02.206140   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.206148   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:02.206156   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:02.206166   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:02.268742   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:02.268760   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:02.298364   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:02.298379   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:02.360782   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:02.360799   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:02.372144   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:02.372159   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:02.440932   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:02.432342   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.433106   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435042   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435754   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.436799   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:02.432342   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.433106   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435042   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435754   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.436799   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
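
The cri.go lines show crictl enumerating containers under containerd's runc root /run/containerd/runc/k8s.io, and every control-plane name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) comes back empty. That means kubelet never created the static pods, so the failure sits upstream of the apiserver itself. A hedged next step (the manifest directory is the conventional kubeadm location, assumed here rather than taken from this log):

    # Check whether kubelet has static-pod manifests to run, and what it logged about them.
    ls -l /etc/kubernetes/manifests/
    sudo journalctl -u kubelet -n 50 --no-pager | grep -iE 'fail|error'
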
	I1206 08:57:04.941190   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:04.951545   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:04.951607   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:04.989383   54452 cri.go:89] found id: ""
	I1206 08:57:04.989398   54452 logs.go:282] 0 containers: []
	W1206 08:57:04.989406   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:04.989413   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:04.989480   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:05.024563   54452 cri.go:89] found id: ""
	I1206 08:57:05.024580   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.024588   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:05.024593   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:05.024654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:05.054247   54452 cri.go:89] found id: ""
	I1206 08:57:05.054260   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.054267   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:05.054272   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:05.054332   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:05.079563   54452 cri.go:89] found id: ""
	I1206 08:57:05.079582   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.079589   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:05.079594   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:05.079654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:05.104268   54452 cri.go:89] found id: ""
	I1206 08:57:05.104281   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.104288   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:05.104294   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:05.104354   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:05.133366   54452 cri.go:89] found id: ""
	I1206 08:57:05.133389   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.133399   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:05.133404   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:05.133473   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:05.157604   54452 cri.go:89] found id: ""
	I1206 08:57:05.157618   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.157625   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:05.157633   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:05.157644   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:05.169011   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:05.169026   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:05.232729   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:05.223674   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.224539   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226385   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226913   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.228611   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:05.223674   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.224539   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226385   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226913   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.228611   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:05.232739   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:05.232750   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:05.295112   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:05.295130   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:05.323164   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:05.323180   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
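
The container-status probe uses a small shell fallback: the backtick substitution inserts crictl's full path when which finds it, otherwise the bare word crictl, and if that command is missing or fails, the || branch runs docker ps instead. An expanded, approximate equivalent (approximate because the original also falls back when crictl exists but exits non-zero):

    # Prefer crictl for container status; fall back to docker when it is unavailable.
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi
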
	I1206 08:57:07.880424   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:07.890491   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:07.890546   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:07.919674   54452 cri.go:89] found id: ""
	I1206 08:57:07.919688   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.919695   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:07.919702   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:07.919765   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:07.944058   54452 cri.go:89] found id: ""
	I1206 08:57:07.944072   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.944080   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:07.944085   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:07.944143   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:07.975197   54452 cri.go:89] found id: ""
	I1206 08:57:07.975211   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.975219   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:07.975223   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:07.975286   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:08.003528   54452 cri.go:89] found id: ""
	I1206 08:57:08.003551   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.003559   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:08.003565   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:08.003632   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:08.042231   54452 cri.go:89] found id: ""
	I1206 08:57:08.042244   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.042251   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:08.042264   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:08.042340   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:08.070769   54452 cri.go:89] found id: ""
	I1206 08:57:08.070783   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.070800   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:08.070806   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:08.070863   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:08.095705   54452 cri.go:89] found id: ""
	I1206 08:57:08.095722   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.095729   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:08.095736   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:08.095745   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:08.152794   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:08.152812   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:08.163981   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:08.164009   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:08.231637   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:08.223305   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.223828   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225446   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225934   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.227447   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:08.223305   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.223828   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225446   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225934   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.227447   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:08.231648   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:08.231659   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:08.294693   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:08.294710   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:10.824685   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:10.834735   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:10.834797   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:10.861282   54452 cri.go:89] found id: ""
	I1206 08:57:10.861297   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.861304   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:10.861309   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:10.861380   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:10.889560   54452 cri.go:89] found id: ""
	I1206 08:57:10.889573   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.889580   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:10.889585   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:10.889646   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:10.918582   54452 cri.go:89] found id: ""
	I1206 08:57:10.918597   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.918605   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:10.918611   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:10.918677   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:10.945055   54452 cri.go:89] found id: ""
	I1206 08:57:10.945068   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.945075   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:10.945081   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:10.945142   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:10.971779   54452 cri.go:89] found id: ""
	I1206 08:57:10.971807   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.971814   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:10.971820   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:10.971883   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:11.007014   54452 cri.go:89] found id: ""
	I1206 08:57:11.007028   54452 logs.go:282] 0 containers: []
	W1206 08:57:11.007035   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:11.007041   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:11.007103   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:11.033387   54452 cri.go:89] found id: ""
	I1206 08:57:11.033415   54452 logs.go:282] 0 containers: []
	W1206 08:57:11.033422   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:11.033431   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:11.033441   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:11.103950   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:11.094735   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.095599   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097342   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097718   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.099415   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:11.094735   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.095599   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097342   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097718   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.099415   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:11.103962   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:11.103972   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:11.168820   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:11.168839   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:11.199653   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:11.199669   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:11.258665   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:11.258682   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
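
Every stderr dump points at the same refused endpoint, https://localhost:8441. Two quick checks tie the symptom together: confirm the node's kubeconfig really targets that port, and confirm nothing is listening on it (the server: field is standard kubeconfig YAML; ss is assumed to be present on the node):

    sudo grep 'server:' /var/lib/minikube/kubeconfig   # expect https://localhost:8441, matching the errors above
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
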
	I1206 08:57:13.770048   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:13.780437   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:13.780537   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:13.804490   54452 cri.go:89] found id: ""
	I1206 08:57:13.804504   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.804511   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:13.804517   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:13.804576   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:13.828142   54452 cri.go:89] found id: ""
	I1206 08:57:13.828156   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.828163   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:13.828173   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:13.828234   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:13.852993   54452 cri.go:89] found id: ""
	I1206 08:57:13.853006   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.853013   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:13.853017   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:13.853073   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:13.876970   54452 cri.go:89] found id: ""
	I1206 08:57:13.876983   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.876990   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:13.876996   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:13.877057   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:13.906173   54452 cri.go:89] found id: ""
	I1206 08:57:13.906189   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.906196   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:13.906201   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:13.906260   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:13.932656   54452 cri.go:89] found id: ""
	I1206 08:57:13.932670   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.932677   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:13.932682   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:13.932744   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:13.958494   54452 cri.go:89] found id: ""
	I1206 08:57:13.958507   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.958514   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:13.958522   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:13.958533   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:13.969906   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:13.969925   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:14.055494   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:14.045404   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.046095   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.048372   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.049321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.050244   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:14.045404   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.046095   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.048372   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.049321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.050244   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:14.055511   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:14.055523   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:14.119159   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:14.119179   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:14.151907   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:14.151925   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:16.720554   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:16.731520   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:16.731584   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:16.757438   54452 cri.go:89] found id: ""
	I1206 08:57:16.757452   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.757458   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:16.757463   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:16.757520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:16.782537   54452 cri.go:89] found id: ""
	I1206 08:57:16.782552   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.782559   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:16.782564   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:16.782619   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:16.811967   54452 cri.go:89] found id: ""
	I1206 08:57:16.811981   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.811988   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:16.811993   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:16.812051   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:16.840450   54452 cri.go:89] found id: ""
	I1206 08:57:16.840464   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.840471   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:16.840477   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:16.840553   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:16.865953   54452 cri.go:89] found id: ""
	I1206 08:57:16.865968   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.865975   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:16.865981   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:16.866043   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:16.890520   54452 cri.go:89] found id: ""
	I1206 08:57:16.890540   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.890547   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:16.890552   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:16.890611   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:16.915368   54452 cri.go:89] found id: ""
	I1206 08:57:16.915411   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.915418   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:16.915425   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:16.915435   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:16.975773   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:16.975792   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:16.990535   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:16.990557   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:17.060425   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:17.052130   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.052751   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054271   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054603   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.056244   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:17.060435   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:17.060446   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:17.124040   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:17.124060   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
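The block above is one full pass of minikube's control-plane probe: look for a running kube-apiserver process, ask crictl for each expected control-plane container, and, when nothing is found, gather kubelet, dmesg, describe-nodes, containerd and container-status logs. The repeated dial tcp [::1]:8441: connect: connection refused lines mean nothing is listening on the apiserver port at all. A minimal sketch of the same probe, runnable inside the node over minikube ssh (component names and crictl flags are taken from the log; the loop itself is illustrative, not minikube's actual code):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # every probe in the log returned an empty id list, i.e. the container was never created
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "no container matching \"$c\""
	done
	# no listener on 8441 is what produces the connection-refused errors from kubectl
	sudo ss -ltn | grep ':8441' || echo 'no listener on 8441'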
	I1206 08:57:19.655902   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:19.666330   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:19.666398   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:19.695219   54452 cri.go:89] found id: ""
	I1206 08:57:19.695232   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.695239   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:19.695245   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:19.695309   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:19.720027   54452 cri.go:89] found id: ""
	I1206 08:57:19.720041   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.720048   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:19.720053   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:19.720112   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:19.745773   54452 cri.go:89] found id: ""
	I1206 08:57:19.745787   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.745794   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:19.745799   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:19.745858   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:19.770885   54452 cri.go:89] found id: ""
	I1206 08:57:19.770898   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.770905   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:19.770910   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:19.770970   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:19.797192   54452 cri.go:89] found id: ""
	I1206 08:57:19.797205   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.797212   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:19.797218   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:19.797278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:19.825222   54452 cri.go:89] found id: ""
	I1206 08:57:19.825236   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.825243   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:19.825248   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:19.825314   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:19.855303   54452 cri.go:89] found id: ""
	I1206 08:57:19.855317   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.855324   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:19.855332   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:19.855342   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:19.912412   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:19.912430   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:19.924673   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:19.924689   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:20.010098   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:19.995577   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998398   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998837   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.003925   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.004952   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:20.010109   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:20.010121   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:20.081433   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:20.081453   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:22.615286   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:22.625653   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:22.625713   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:22.650708   54452 cri.go:89] found id: ""
	I1206 08:57:22.650721   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.650728   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:22.650734   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:22.650793   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:22.675795   54452 cri.go:89] found id: ""
	I1206 08:57:22.675809   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.675816   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:22.675821   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:22.675876   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:22.700140   54452 cri.go:89] found id: ""
	I1206 08:57:22.700153   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.700160   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:22.700165   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:22.700224   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:22.726855   54452 cri.go:89] found id: ""
	I1206 08:57:22.726869   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.726876   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:22.726882   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:22.726938   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:22.751934   54452 cri.go:89] found id: ""
	I1206 08:57:22.751947   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.751954   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:22.751960   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:22.752017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:22.780047   54452 cri.go:89] found id: ""
	I1206 08:57:22.780061   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.780068   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:22.780074   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:22.780132   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:22.804185   54452 cri.go:89] found id: ""
	I1206 08:57:22.804199   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.804206   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:22.804214   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:22.804230   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:22.814840   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:22.814855   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:22.881877   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:22.873545   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.874258   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.875884   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.876440   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.878086   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:22.881887   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:22.881897   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:22.949826   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:22.949846   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:22.990802   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:22.990820   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:25.557401   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:25.567869   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:25.567931   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:25.593044   54452 cri.go:89] found id: ""
	I1206 08:57:25.593058   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.593065   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:25.593070   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:25.593131   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:25.621119   54452 cri.go:89] found id: ""
	I1206 08:57:25.621134   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.621141   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:25.621146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:25.621206   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:25.649977   54452 cri.go:89] found id: ""
	I1206 08:57:25.649991   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.649998   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:25.650003   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:25.650066   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:25.674573   54452 cri.go:89] found id: ""
	I1206 08:57:25.674586   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.674593   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:25.674598   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:25.674654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:25.700412   54452 cri.go:89] found id: ""
	I1206 08:57:25.700425   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.700432   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:25.700438   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:25.700501   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:25.726656   54452 cri.go:89] found id: ""
	I1206 08:57:25.726670   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.726686   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:25.726691   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:25.726760   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:25.751625   54452 cri.go:89] found id: ""
	I1206 08:57:25.751639   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.751646   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:25.751653   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:25.751664   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:25.812914   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:25.804895   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.805687   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807191   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807672   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.809146   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:25.812924   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:25.812936   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:25.875880   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:25.875898   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:25.905301   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:25.905316   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:25.964301   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:25.964320   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:28.477584   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:28.487626   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:28.487685   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:28.516024   54452 cri.go:89] found id: ""
	I1206 08:57:28.516038   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.516045   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:28.516050   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:28.516109   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:28.542151   54452 cri.go:89] found id: ""
	I1206 08:57:28.542165   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.542172   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:28.542177   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:28.542234   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:28.569963   54452 cri.go:89] found id: ""
	I1206 08:57:28.569977   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.569984   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:28.569989   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:28.570047   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:28.594336   54452 cri.go:89] found id: ""
	I1206 08:57:28.594350   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.594357   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:28.594362   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:28.594421   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:28.620834   54452 cri.go:89] found id: ""
	I1206 08:57:28.620846   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.620854   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:28.620859   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:28.620916   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:28.645672   54452 cri.go:89] found id: ""
	I1206 08:57:28.645686   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.645693   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:28.645698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:28.645762   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:28.670982   54452 cri.go:89] found id: ""
	I1206 08:57:28.670997   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.671004   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:28.671011   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:28.671022   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:28.729216   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:28.729234   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:28.741378   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:28.741394   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:28.808285   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:28.799664   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.800557   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802319   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802654   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.804202   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:28.808296   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:28.808308   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:28.872187   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:28.872205   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:31.410802   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:31.421507   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:31.421567   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:31.449192   54452 cri.go:89] found id: ""
	I1206 08:57:31.449206   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.449213   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:31.449219   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:31.449278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:31.479043   54452 cri.go:89] found id: ""
	I1206 08:57:31.479057   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.479070   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:31.479075   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:31.479138   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:31.504010   54452 cri.go:89] found id: ""
	I1206 08:57:31.504024   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.504031   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:31.504036   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:31.504094   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:31.529789   54452 cri.go:89] found id: ""
	I1206 08:57:31.529807   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.529818   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:31.529824   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:31.529890   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:31.555332   54452 cri.go:89] found id: ""
	I1206 08:57:31.555346   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.555354   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:31.555359   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:31.555449   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:31.585896   54452 cri.go:89] found id: ""
	I1206 08:57:31.585909   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.585916   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:31.585922   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:31.585980   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:31.610938   54452 cri.go:89] found id: ""
	I1206 08:57:31.610950   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.610958   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:31.610965   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:31.610975   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:31.667535   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:31.667553   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:31.680211   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:31.680234   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:31.750810   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:31.742704   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.743477   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745233   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745766   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.746756   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:31.750821   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:31.750833   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:31.813960   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:31.813983   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:34.341858   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:34.352097   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:34.352170   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:34.379126   54452 cri.go:89] found id: ""
	I1206 08:57:34.379140   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.379148   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:34.379153   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:34.379211   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:34.404136   54452 cri.go:89] found id: ""
	I1206 08:57:34.404150   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.404158   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:34.404163   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:34.404222   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:34.429318   54452 cri.go:89] found id: ""
	I1206 08:57:34.429333   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.429340   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:34.429346   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:34.429410   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:34.454607   54452 cri.go:89] found id: ""
	I1206 08:57:34.454621   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.454628   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:34.454633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:34.454689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:34.481702   54452 cri.go:89] found id: ""
	I1206 08:57:34.481715   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.481722   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:34.481727   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:34.481786   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:34.506222   54452 cri.go:89] found id: ""
	I1206 08:57:34.506236   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.506242   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:34.506247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:34.506307   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:34.531791   54452 cri.go:89] found id: ""
	I1206 08:57:34.531804   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.531811   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:34.531818   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:34.531829   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:34.542352   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:34.542368   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:34.605646   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:34.597261   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.597943   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.599605   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.600148   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.601815   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:34.605655   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:34.605666   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:34.668800   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:34.668818   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:34.703806   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:34.703822   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:37.265019   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:37.275013   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:37.275073   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:37.300683   54452 cri.go:89] found id: ""
	I1206 08:57:37.300696   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.300704   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:37.300710   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:37.300768   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:37.326083   54452 cri.go:89] found id: ""
	I1206 08:57:37.326096   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.326103   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:37.326109   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:37.326169   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:37.354381   54452 cri.go:89] found id: ""
	I1206 08:57:37.354395   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.354402   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:37.354407   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:37.354467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:37.379048   54452 cri.go:89] found id: ""
	I1206 08:57:37.379062   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.379069   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:37.379074   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:37.379132   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:37.407083   54452 cri.go:89] found id: ""
	I1206 08:57:37.407097   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.407104   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:37.407120   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:37.407179   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:37.430756   54452 cri.go:89] found id: ""
	I1206 08:57:37.430769   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.430777   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:37.430782   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:37.430839   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:37.459469   54452 cri.go:89] found id: ""
	I1206 08:57:37.459483   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.459490   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:37.459498   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:37.459510   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:37.470844   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:37.470860   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:37.538783   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:37.530038   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.530744   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.532506   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.533299   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.534867   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:37.538793   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:37.538804   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:37.604935   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:37.604954   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:37.637474   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:37.637491   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:40.195736   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:40.205728   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:40.205790   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:40.242821   54452 cri.go:89] found id: ""
	I1206 08:57:40.242834   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.242841   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:40.242847   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:40.242902   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:40.284606   54452 cri.go:89] found id: ""
	I1206 08:57:40.284620   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.284628   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:40.284633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:40.284689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:40.317256   54452 cri.go:89] found id: ""
	I1206 08:57:40.317270   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.317277   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:40.317282   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:40.317339   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:40.341890   54452 cri.go:89] found id: ""
	I1206 08:57:40.341904   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.341911   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:40.341916   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:40.341971   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:40.365889   54452 cri.go:89] found id: ""
	I1206 08:57:40.365902   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.365909   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:40.365915   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:40.365970   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:40.390366   54452 cri.go:89] found id: ""
	I1206 08:57:40.390379   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.390386   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:40.390393   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:40.390451   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:40.414154   54452 cri.go:89] found id: ""
	I1206 08:57:40.414168   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.414174   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:40.414182   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:40.414192   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:40.425672   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:40.425688   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:40.491793   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:40.479914   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.480484   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485346   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485909   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.487745   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:40.491804   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:40.491815   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:40.554734   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:40.554754   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:40.585496   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:40.585511   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:43.142927   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:43.152875   54452 kubeadm.go:602] duration metric: took 4m4.203206664s to restartPrimaryControlPlane
	W1206 08:57:43.152943   54452 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 08:57:43.153014   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 08:57:43.558005   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:57:43.571431   54452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:57:43.579298   54452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 08:57:43.579354   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:57:43.587284   54452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 08:57:43.587293   54452 kubeadm.go:158] found existing configuration files:
	
	I1206 08:57:43.587347   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:57:43.595209   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 08:57:43.595263   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 08:57:43.602677   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:57:43.610821   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 08:57:43.610884   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:57:43.618219   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:57:43.625867   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 08:57:43.625922   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:57:43.633373   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:57:43.640818   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 08:57:43.640880   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 08:57:43.648275   54452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 08:57:43.690498   54452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 08:57:43.690790   54452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 08:57:43.763599   54452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 08:57:43.763663   54452 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 08:57:43.763697   54452 kubeadm.go:319] OS: Linux
	I1206 08:57:43.763740   54452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 08:57:43.763787   54452 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 08:57:43.763833   54452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 08:57:43.763880   54452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 08:57:43.763928   54452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 08:57:43.763975   54452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 08:57:43.764019   54452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 08:57:43.764066   54452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 08:57:43.764112   54452 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 08:57:43.838707   54452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 08:57:43.838810   54452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 08:57:43.838899   54452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 08:57:43.843797   54452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 08:57:43.849166   54452 out.go:252]   - Generating certificates and keys ...
	I1206 08:57:43.849248   54452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 08:57:43.849312   54452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 08:57:43.849386   54452 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 08:57:43.849451   54452 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 08:57:43.849520   54452 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 08:57:43.849572   54452 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 08:57:43.849633   54452 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 08:57:43.849693   54452 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 08:57:43.849766   54452 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 08:57:43.849838   54452 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 08:57:43.849874   54452 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 08:57:43.849928   54452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 08:57:44.005203   54452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 08:57:44.248156   54452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 08:57:44.506601   54452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 08:57:44.747606   54452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 08:57:44.875144   54452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 08:57:44.875922   54452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 08:57:44.878561   54452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 08:57:44.881876   54452 out.go:252]   - Booting up control plane ...
	I1206 08:57:44.881976   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 08:57:44.882052   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 08:57:44.882117   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 08:57:44.902770   54452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 08:57:44.902884   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 08:57:44.910887   54452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 08:57:44.915557   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 08:57:44.915618   54452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 08:57:45.072565   54452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 08:57:45.072679   54452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:01:45.073201   54452 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00139193s
	I1206 09:01:45.073230   54452 kubeadm.go:319] 
	I1206 09:01:45.073292   54452 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:01:45.073325   54452 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:01:45.073460   54452 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:01:45.073475   54452 kubeadm.go:319] 
	I1206 09:01:45.073605   54452 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:01:45.073641   54452 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:01:45.073671   54452 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:01:45.073674   54452 kubeadm.go:319] 
	I1206 09:01:45.079541   54452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:01:45.080019   54452 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:01:45.080137   54452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:01:45.080372   54452 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 09:01:45.080377   54452 kubeadm.go:319] 
	W1206 09:01:45.080611   54452 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00139193s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
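kubeadm's wait-control-plane phase is just polling the kubelet health endpoint named in the error, so the same probe can be replayed by hand to separate "kubelet not running" from "kubelet unhealthy". A sketch, with the docker exec wrapper into the profile's node container assumed as above:

    docker exec functional-090986 curl -s http://127.0.0.1:10248/healthz
    docker exec functional-090986 systemctl status kubelet --no-pager -l
    docker exec functional-090986 journalctl -xeu kubelet --no-pager | tail -n 40

The two systemd commands are the ones the kubeadm output itself recommends; the journal tail is what the later "==> kubelet <==" section of this report captures.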
	
	I1206 09:01:45.080716   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:01:45.081059   54452 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:01:45.527784   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:01:45.541714   54452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:01:45.541768   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:01:45.549724   54452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:01:45.549735   54452 kubeadm.go:158] found existing configuration files:
	
	I1206 09:01:45.549787   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 09:01:45.557657   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:01:45.557710   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:01:45.565116   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 09:01:45.572963   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:01:45.573017   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:01:45.580604   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 09:01:45.588212   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:01:45.588267   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:01:45.595779   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 09:01:45.604082   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:01:45.604137   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:01:45.612084   54452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:01:45.650374   54452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:01:45.650428   54452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:01:45.720642   54452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:01:45.720706   54452 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:01:45.720740   54452 kubeadm.go:319] OS: Linux
	I1206 09:01:45.720783   54452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:01:45.720831   54452 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:01:45.720876   54452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:01:45.720923   54452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:01:45.720970   54452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:01:45.721017   54452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:01:45.721061   54452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:01:45.721107   54452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:01:45.721153   54452 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:01:45.786361   54452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:01:45.786476   54452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:01:45.786571   54452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:01:45.791901   54452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:01:45.795433   54452 out.go:252]   - Generating certificates and keys ...
	I1206 09:01:45.795514   54452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:01:45.795578   54452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:01:45.795654   54452 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 09:01:45.795714   54452 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 09:01:45.795783   54452 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 09:01:45.795835   54452 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 09:01:45.795898   54452 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 09:01:45.795958   54452 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 09:01:45.796032   54452 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 09:01:45.796104   54452 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 09:01:45.796185   54452 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 09:01:45.796240   54452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:01:45.935718   54452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:01:46.055895   54452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:01:46.294260   54452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:01:46.619812   54452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:01:46.778456   54452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:01:46.779211   54452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:01:46.782067   54452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:01:46.785434   54452 out.go:252]   - Booting up control plane ...
	I1206 09:01:46.785536   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:01:46.785617   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:01:46.785688   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:01:46.805726   54452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:01:46.805831   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:01:46.814430   54452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:01:46.816546   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:01:46.816591   54452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:01:46.952811   54452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:01:46.952924   54452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:05:46.951725   54452 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00022284s
	I1206 09:05:46.951748   54452 kubeadm.go:319] 
	I1206 09:05:46.951804   54452 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:05:46.951836   54452 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:05:46.951939   54452 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:05:46.951944   54452 kubeadm.go:319] 
	I1206 09:05:46.952047   54452 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:05:46.952078   54452 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:05:46.952108   54452 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:05:46.952111   54452 kubeadm.go:319] 
	I1206 09:05:46.956655   54452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:05:46.957065   54452 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:05:46.957172   54452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:05:46.957405   54452 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 09:05:46.957409   54452 kubeadm.go:319] 
	I1206 09:05:46.957479   54452 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:05:46.957537   54452 kubeadm.go:403] duration metric: took 12m8.043807841s to StartCluster
	I1206 09:05:46.957567   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:05:46.957632   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:05:47.005263   54452 cri.go:89] found id: ""
	I1206 09:05:47.005276   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.005284   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 09:05:47.005289   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:05:47.005348   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:05:47.039824   54452 cri.go:89] found id: ""
	I1206 09:05:47.039837   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.039844   54452 logs.go:284] No container was found matching "etcd"
	I1206 09:05:47.039849   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:05:47.039907   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:05:47.069199   54452 cri.go:89] found id: ""
	I1206 09:05:47.069215   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.069222   54452 logs.go:284] No container was found matching "coredns"
	I1206 09:05:47.069228   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:05:47.069290   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:05:47.094120   54452 cri.go:89] found id: ""
	I1206 09:05:47.094134   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.094141   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 09:05:47.094146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:05:47.094204   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:05:47.117873   54452 cri.go:89] found id: ""
	I1206 09:05:47.117887   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.117895   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:05:47.117900   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:05:47.117957   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:05:47.141782   54452 cri.go:89] found id: ""
	I1206 09:05:47.141796   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.141803   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 09:05:47.141809   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:05:47.141869   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:05:47.167265   54452 cri.go:89] found id: ""
	I1206 09:05:47.167280   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.167287   54452 logs.go:284] No container was found matching "kindnet"
	I1206 09:05:47.167295   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 09:05:47.167314   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:05:47.224071   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 09:05:47.224090   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:05:47.235798   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:05:47.235814   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:05:47.303156   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:05:47.295299   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.295956   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297451   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297881   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.299336   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 09:05:47.295299   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.295956   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297451   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297881   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.299336   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:05:47.303181   54452 logs.go:123] Gathering logs for containerd ...
	I1206 09:05:47.303191   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:05:47.366843   54452 logs.go:123] Gathering logs for container status ...
	I1206 09:05:47.366863   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 09:05:47.396270   54452 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
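The cgroups v1 warning in the stderr above is survivable here only because SystemVerification is already on the --ignore-preflight-errors list; per the warning text, kubelet v1.35+ on a cgroup v1 host additionally requires the KubeletConfiguration option it names to be set to false. Whether minikube's generated config carries it can be checked directly (the camelCase YAML spelling failCgroupV1 is an assumption, hence the case-insensitive grep; the file path is the one the [kubelet-start] phase writes above):

    docker exec functional-090986 grep -n -i failcgroupv1 /var/lib/kubelet/config.yaml

Going by the warning text, a missing or true-valued entry would make the kubelet refuse to start on a cgroup v1 host like this 5.15 AWS kernel, which would be consistent with the dead healthz endpoint.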
	W1206 09:05:47.396302   54452 out.go:285] * 
	W1206 09:05:47.396359   54452 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:05:47.396374   54452 out.go:285] * 
	W1206 09:05:47.398505   54452 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:05:47.405628   54452 out.go:203] 
	W1206 09:05:47.408588   54452 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:05:47.408634   54452 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 09:05:47.408679   54452 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 09:05:47.411976   54452 out.go:203] 
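The suggested remediation translates into retrying the same start with the extra kubelet flag. A sketch, reusing only flags that appear in this run's own trace (profile, driver, runtime, Kubernetes version):

    out/minikube-linux-arm64 start -p functional-090986 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd

The linked issue predates the cgroup v1 deprecation seen in the warnings above, so on this kernel the FailCgroupV1 setting may matter more than the cgroup driver.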
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948296356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948312964Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948379313Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948412085Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948441403Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948462491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948482866Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948510698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948529111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948562713Z" level=info msg="Connect containerd service"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948903673Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.949608593Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967402561Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967484402Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967550135Z" level=info msg="Start subscribing containerd event"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967692902Z" level=info msg="Start recovering state"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019042107Z" level=info msg="Start event monitor"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019110196Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019122955Z" level=info msg="Start streaming server"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019132310Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019140786Z" level=info msg="runtime interface starting up..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019147531Z" level=info msg="starting plugins..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019160085Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 08:53:37 functional-090986 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.020711198Z" level=info msg="containerd successfully booted in 0.094795s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:05:48.657928   21085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:48.658672   21085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:48.660309   21085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:48.660759   21085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:48.662287   21085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 09:05:48 up 48 min,  0 user,  load average: 0.06, 0.18, 0.35
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 09:05:45 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:05:46 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 06 09:05:46 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:46 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:46 functional-090986 kubelet[20894]: E1206 09:05:46.266525   20894 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:05:46 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:05:46 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:05:46 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 09:05:46 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:46 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:47 functional-090986 kubelet[20905]: E1206 09:05:47.025530   20905 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:05:47 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:05:47 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:05:47 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 09:05:47 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:47 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:47 functional-090986 kubelet[20995]: E1206 09:05:47.750366   20995 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:05:47 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:05:47 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:05:48 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 09:05:48 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:48 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:48 functional-090986 kubelet[21056]: E1206 09:05:48.526350   21056 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:05:48 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:05:48 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
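The kubelet journal above pins down the failure: kubelet v1.35.0-beta.0 refuses to start on a host still running cgroup v1, systemd restarts it in a loop (restart counter 319 through 322), and the apiserver on port 8441 therefore never comes up, which is exactly what the describe-nodes errors show. A minimal Go sketch, not part of the test harness, for checking which cgroup hierarchy a host like this Ubuntu 20.04 / 5.15.0-1084-aws machine mounts (assumes Linux and the golang.org/x/sys/unix package):

	// cgroupcheck reports whether /sys/fs/cgroup is mounted as the
	// cgroup v2 unified hierarchy, the condition the kubelet above enforces.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified): passes the kubelet validation")
		} else {
			// On legacy hosts /sys/fs/cgroup is a tmpfs holding the v1
			// controllers, which is what the kubelet error above rejects.
			fmt.Println("cgroup v1: matches the validation failure in the log")
		}
	}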
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (386.349201ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.97s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-090986 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-090986 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (61.16773ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-090986 get po -l tier=control-plane -n kube-system -o=json": exit status 1
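The stderr above is the client-side view of the same stopped control plane: "connection refused" from 192.168.49.2:8441 means nothing is listening on the apiserver port at all, which is a different failure from an unhealthy apiserver that answers with errors. A short Go sketch, illustrative rather than harness code, that makes the distinction explicit by dialing the endpoint from the failing kubectl call before attempting any API traffic:

	// apiprobe distinguishes "no listener" (the case in this report)
	// from API-level failures by attempting a bare TCP dial first.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.49.2:8441" // endpoint from the kubectl error above
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println("no listener on", addr, "->", err)
			return
		}
		conn.Close()
		fmt.Println("port open; any remaining failure is TLS- or API-level")
	}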
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
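The inspect dump matters here mostly for its port table: 8441/tcp inside the container, the apiserver port, is published on 127.0.0.1:32791, so the host-side probes and the in-container localhost:8441 calls both end at the same dead listener. A Go sketch, assumptions only (the struct declares just the fields visible in the JSON above), that extracts that mapping from `docker inspect` output on stdin, much as minikube's cli_runner does with a Go template:

	// portmap reads `docker inspect <container>` JSON from stdin and
	// prints the host binding(s) for the container's 8441/tcp port.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Only the fields used below are declared; names follow the JSON above.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var containers []inspect
		if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil || len(containers) == 0 {
			fmt.Fprintln(os.Stderr, "decode failed:", err)
			os.Exit(1)
		}
		for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // here: 127.0.0.1:32791
		}
	}

Run against the container above with: docker inspect functional-090986 | go run portmap.go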
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (327.103425ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
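As with the {{.APIServer}} probe earlier, the command exits with code 2 while still printing a usable state ("Running"), and the harness deliberately treats that as "may be ok": minikube signals degraded components through the exit code rather than through command failure. A sketch of the lenient way to consume it, capturing the code instead of aborting on a non-zero exit; the binary path and profile name are copied from the log, everything else is illustrative:

	// statuscheck reruns the harness's probe but tolerates a non-zero
	// exit, since status 2 here only means a component is stopped.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "functional-090986")
		out, err := cmd.Output()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // 2 in the runs above
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("state=%q exit=%d\n", string(out), code)
	}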
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-181746 image ls --format json --alsologtostderr                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls --format table --alsologtostderr                                                                                             │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ update-context │ functional-181746 update-context --alsologtostderr -v=2                                                                                                 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ image          │ functional-181746 image ls                                                                                                                              │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ delete         │ -p functional-181746                                                                                                                                    │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
	│ start          │ -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │                     │
	│ start          │ -p functional-090986 --alsologtostderr -v=8                                                                                                             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:47 UTC │                     │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add registry.k8s.io/pause:latest                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache add minikube-local-cache-test:functional-090986                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ functional-090986 cache delete minikube-local-cache-test:functional-090986                                                                              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl images                                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	│ cache          │ functional-090986 cache reload                                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh            │ functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ kubectl        │ functional-090986 kubectl -- --context functional-090986 get pods                                                                                       │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	│ start          │ -p functional-090986 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:53:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:53:33.876279   54452 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:53:33.876426   54452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:53:33.876430   54452 out.go:374] Setting ErrFile to fd 2...
	I1206 08:53:33.876434   54452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:53:33.876677   54452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:53:33.877013   54452 out.go:368] Setting JSON to false
	I1206 08:53:33.877825   54452 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2165,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:53:33.877882   54452 start.go:143] virtualization:  
	I1206 08:53:33.881239   54452 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:53:33.885112   54452 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:53:33.885177   54452 notify.go:221] Checking for updates...
	I1206 08:53:33.891576   54452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:53:33.894372   54452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:53:33.897142   54452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:53:33.900076   54452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:53:33.902894   54452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:53:33.906249   54452 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:53:33.906348   54452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:53:33.928682   54452 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:53:33.928770   54452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:53:33.993741   54452 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 08:53:33.983085793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:53:33.993843   54452 docker.go:319] overlay module found
	I1206 08:53:33.999105   54452 out.go:179] * Using the docker driver based on existing profile
	I1206 08:53:34.002148   54452 start.go:309] selected driver: docker
	I1206 08:53:34.002159   54452 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:34.002241   54452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:53:34.002360   54452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:53:34.059754   54452 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 08:53:34.048620994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:53:34.060212   54452 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:53:34.060235   54452 cni.go:84] Creating CNI manager for ""
	I1206 08:53:34.060282   54452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:53:34.060330   54452 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:34.065569   54452 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:53:34.068398   54452 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:53:34.071322   54452 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:53:34.074275   54452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:53:34.074316   54452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:53:34.074325   54452 cache.go:65] Caching tarball of preloaded images
	I1206 08:53:34.074364   54452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:53:34.074457   54452 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:53:34.074467   54452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:53:34.074577   54452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:53:34.094292   54452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:53:34.094303   54452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:53:34.094322   54452 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:53:34.094352   54452 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:53:34.094428   54452 start.go:364] duration metric: took 60.843µs to acquireMachinesLock for "functional-090986"
	I1206 08:53:34.094446   54452 start.go:96] Skipping create...Using existing machine configuration
	I1206 08:53:34.094451   54452 fix.go:54] fixHost starting: 
	I1206 08:53:34.094714   54452 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:53:34.110952   54452 fix.go:112] recreateIfNeeded on functional-090986: state=Running err=<nil>
	W1206 08:53:34.110973   54452 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 08:53:34.114350   54452 out.go:252] * Updating the running docker "functional-090986" container ...
	I1206 08:53:34.114380   54452 machine.go:94] provisionDockerMachine start ...
	I1206 08:53:34.114470   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.132110   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.132436   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.132441   54452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:53:34.290732   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:53:34.290745   54452 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:53:34.290806   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.309786   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.310075   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.310083   54452 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:53:34.468771   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:53:34.468838   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.492421   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.492726   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.492743   54452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:53:34.643743   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:53:34.643757   54452 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:53:34.643785   54452 ubuntu.go:190] setting up certificates
	I1206 08:53:34.643793   54452 provision.go:84] configureAuth start
	I1206 08:53:34.643849   54452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:53:34.661031   54452 provision.go:143] copyHostCerts
	I1206 08:53:34.661090   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:53:34.661103   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:53:34.661173   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:53:34.661279   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:53:34.661283   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:53:34.661307   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:53:34.661364   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:53:34.661367   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:53:34.661387   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:53:34.661440   54452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
	I1206 08:53:35.261601   54452 provision.go:177] copyRemoteCerts
	I1206 08:53:35.261659   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:53:35.261707   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.278502   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.383098   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:53:35.400343   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:53:35.417458   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 08:53:35.434271   54452 provision.go:87] duration metric: took 790.45575ms to configureAuth
	I1206 08:53:35.434289   54452 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:53:35.434485   54452 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:53:35.434491   54452 machine.go:97] duration metric: took 1.320106202s to provisionDockerMachine
	I1206 08:53:35.434498   54452 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:53:35.434507   54452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:53:35.434552   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:53:35.434601   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.452073   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.559110   54452 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:53:35.562282   54452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:53:35.562301   54452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:53:35.562313   54452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:53:35.562372   54452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:53:35.562453   54452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:53:35.562529   54452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:53:35.562578   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:53:35.569704   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:53:35.586692   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:53:35.603733   54452 start.go:296] duration metric: took 169.221467ms for postStartSetup
	I1206 08:53:35.603809   54452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:53:35.603847   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.620625   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.725607   54452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:53:35.730716   54452 fix.go:56] duration metric: took 1.636258463s for fixHost
	I1206 08:53:35.730732   54452 start.go:83] releasing machines lock for "functional-090986", held for 1.636296668s
	I1206 08:53:35.730797   54452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:53:35.748170   54452 ssh_runner.go:195] Run: cat /version.json
	I1206 08:53:35.748211   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.748450   54452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:53:35.748491   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.780618   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.788438   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.895097   54452 ssh_runner.go:195] Run: systemctl --version
	I1206 08:53:35.994868   54452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 08:53:36.000428   54452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:53:36.000495   54452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:53:36.008950   54452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 08:53:36.008964   54452 start.go:496] detecting cgroup driver to use...
	I1206 08:53:36.008997   54452 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 08:53:36.009046   54452 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:53:36.024586   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:53:36.037573   54452 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:53:36.037628   54452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:53:36.053442   54452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:53:36.066493   54452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:53:36.187062   54452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:53:36.308311   54452 docker.go:234] disabling docker service ...
	I1206 08:53:36.308366   54452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:53:36.324390   54452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:53:36.337942   54452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:53:36.464363   54452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:53:36.601173   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:53:36.614787   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:53:36.630199   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:53:36.639943   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:53:36.649262   54452 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:53:36.649336   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:53:36.657952   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:53:36.666666   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:53:36.675637   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:53:36.684412   54452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:53:36.692740   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:53:36.701838   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:53:36.712344   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 08:53:36.721508   54452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:53:36.729269   54452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:53:36.736851   54452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:53:36.864978   54452 ssh_runner.go:195] Run: sudo systemctl restart containerd
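Note: the run of sed edits above rewrites /etc/containerd/config.toml in place (cgroup driver, sandbox image, runtime type, CNI conf_dir) before the daemon-reload and restart. A hedged way to confirm the edits landed, outside this test run:

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # expected after the edits above:
    #   SystemdCgroup = false                              (cgroupfs driver)
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   conf_dir = "/etc/cni/net.d"
    systemctl is-active containerd    # should print "active" after the restart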
	I1206 08:53:37.021054   54452 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:53:37.021112   54452 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:53:37.025377   54452 start.go:564] Will wait 60s for crictl version
	I1206 08:53:37.025433   54452 ssh_runner.go:195] Run: which crictl
	I1206 08:53:37.029231   54452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:53:37.053402   54452 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
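Note: crictl reads its runtime endpoint from the /etc/crictl.yaml written a few steps earlier, which is why the bare `crictl version` above reaches containerd. A manual equivalent with the endpoint passed explicitly (sketch, not part of the run):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version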
	I1206 08:53:37.053462   54452 ssh_runner.go:195] Run: containerd --version
	I1206 08:53:37.077672   54452 ssh_runner.go:195] Run: containerd --version
	I1206 08:53:37.104087   54452 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 08:53:37.107051   54452 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:53:37.126470   54452 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:53:37.133471   54452 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 08:53:37.136362   54452 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:53:37.136495   54452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:53:37.136575   54452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:53:37.161065   54452 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:53:37.161078   54452 containerd.go:534] Images already preloaded, skipping extraction
	I1206 08:53:37.161139   54452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:53:37.189850   54452 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:53:37.189861   54452 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:53:37.189866   54452 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:53:37.189968   54452 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:53:37.190042   54452 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:53:37.215125   54452 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 08:53:37.215146   54452 cni.go:84] Creating CNI manager for ""
	I1206 08:53:37.215156   54452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:53:37.215169   54452 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:53:37.215191   54452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:53:37.215303   54452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 08:53:37.215394   54452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:53:37.223611   54452 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:53:37.223674   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:53:37.231742   54452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:53:37.245618   54452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:53:37.258873   54452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
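Note: the 2087-byte payload scp'd to /var/tmp/minikube/kubeadm.yaml.new is the generated config printed above. Recent kubeadm releases (v1.26 and later) can sanity-check such a file before it is used; a hedged check using this run's binary layout:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new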
	I1206 08:53:37.272656   54452 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:53:37.277122   54452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:53:37.404546   54452 ssh_runner.go:195] Run: sudo systemctl start kubelet
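Note: at this point the kubelet.service unit and its 10-kubeadm.conf drop-in (both scp'd above) are installed and the service started. A hedged way to inspect the effective unit on the node:

    systemctl cat kubelet       # prints kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet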
	I1206 08:53:38.220934   54452 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:53:38.220945   54452 certs.go:195] generating shared ca certs ...
	I1206 08:53:38.220959   54452 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:53:38.221099   54452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:53:38.221148   54452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:53:38.221154   54452 certs.go:257] generating profile certs ...
	I1206 08:53:38.221235   54452 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:53:38.221287   54452 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:53:38.221325   54452 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:53:38.221433   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:53:38.221466   54452 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:53:38.221473   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:53:38.221504   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:53:38.221527   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:53:38.221551   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:53:38.221601   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:53:38.222193   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:53:38.247995   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:53:38.268014   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:53:38.289184   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:53:38.308825   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:53:38.326629   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:53:38.344198   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:53:38.361819   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:53:38.379442   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:53:38.397025   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:53:38.414583   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:53:38.432182   54452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:53:38.444938   54452 ssh_runner.go:195] Run: openssl version
	I1206 08:53:38.451220   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.458796   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:53:38.466335   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.470195   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.470251   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.511660   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:53:38.520107   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.527562   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:53:38.535252   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.539202   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.539257   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.580913   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:53:38.589267   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.596722   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:53:38.604956   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.609011   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.609077   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.654662   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
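Note: the /etc/ssl/certs/b5213941.0, 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash links: `openssl x509 -hash` prints a hash of the certificate's subject, and OpenSSL looks CAs up as "<hash>.0" under /etc/ssl/certs. A sketch of the idiom for the minikubeCA cert (link target chosen to mirror this run's layout):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo linked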
	I1206 08:53:38.662094   54452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:53:38.666110   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 08:53:38.707066   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 08:53:38.748028   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 08:53:38.790291   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 08:53:38.831326   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 08:53:38.872506   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
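Note: each `-checkend 86400` probe exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what would push minikube to regenerate the cert. Standalone illustration:

    if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
      echo "valid for at least 24h"            # the case for all six certs in this run
    else
      echo "expires within 24h - regenerate"
    fi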
	I1206 08:53:38.913738   54452 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:38.913828   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:53:38.913894   54452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:53:38.941817   54452 cri.go:89] found id: ""
	I1206 08:53:38.941888   54452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:53:38.949650   54452 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 08:53:38.949660   54452 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 08:53:38.949712   54452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 08:53:38.957046   54452 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:38.957552   54452 kubeconfig.go:125] found "functional-090986" server: "https://192.168.49.2:8441"
	I1206 08:53:38.960001   54452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 08:53:38.973807   54452 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 08:39:02.953222088 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 08:53:37.265532344 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
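Note: the drift check leans on diff's exit status: `diff -u old new` exits 0 when the files match and 1 when they differ, so the admission-plugins change shown above (status 1) is what routes this start into the reconfigure path. Minimal sketch:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "kubeadm config drift - will reconfigure"   # the branch taken here
    fi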
	I1206 08:53:38.973835   54452 kubeadm.go:1161] stopping kube-system containers ...
	I1206 08:53:38.973855   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 08:53:38.973990   54452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:53:39.006630   54452 cri.go:89] found id: ""
	I1206 08:53:39.006691   54452 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 08:53:39.027188   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:53:39.035115   54452 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  6 08:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 08:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 08:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  6 08:43 /etc/kubernetes/scheduler.conf
	
	I1206 08:53:39.035195   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:53:39.043346   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:53:39.051128   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.051184   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:53:39.058808   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:53:39.066431   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.066486   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:53:39.074261   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:53:39.082004   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.082060   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 08:53:39.089693   54452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:53:39.097973   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:39.144114   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.034967   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.247090   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.303335   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
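Note: instead of a full `kubeadm init`, the restart path replays individual init phases against the regenerated config, in the order logged above. A hedged shell equivalent of that sequence:

    KUBEADM=/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        $KUBEADM init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done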
	I1206 08:53:40.358218   54452 api_server.go:52] waiting for apiserver process to appear ...
	I1206 08:53:40.358284   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:40.858753   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:41.358700   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 116 near-identical "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" probes, one roughly every 500ms from 08:53:41.858760 through 08:54:39.358992, elided ...]
	I1206 08:54:39.859022   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
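Note: the probes condensed above are minikube's wait for the apiserver process: `pgrep -xnf` requires the regex to match the entire command line (-f -x), reports only the newest match (-n), and exits non-zero until a matching process appears. A minimal equivalent loop (timeout chosen for illustration):

    # poll every 0.5s, give up after ~60s
    for _ in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done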
	I1206 08:54:40.358688   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:40.358791   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:40.388106   54452 cri.go:89] found id: ""
	I1206 08:54:40.388120   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.388134   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:40.388140   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:40.388201   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:40.412432   54452 cri.go:89] found id: ""
	I1206 08:54:40.412446   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.412453   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:40.412458   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:40.412515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:40.436247   54452 cri.go:89] found id: ""
	I1206 08:54:40.436261   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.436268   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:40.436274   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:40.436334   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:40.461648   54452 cri.go:89] found id: ""
	I1206 08:54:40.461662   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.461669   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:40.461674   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:40.461731   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:40.490826   54452 cri.go:89] found id: ""
	I1206 08:54:40.490840   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.490846   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:40.490851   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:40.490912   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:40.517246   54452 cri.go:89] found id: ""
	I1206 08:54:40.517259   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.517266   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:40.517272   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:40.517331   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:40.542129   54452 cri.go:89] found id: ""
	I1206 08:54:40.542144   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.542150   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:40.542157   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:40.542167   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:40.599816   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:40.599836   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:40.610692   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:40.610709   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:40.681214   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:40.671721   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673072   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673914   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.675628   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.676278   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:40.671721   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673072   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673914   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.675628   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.676278   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:40.681229   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:40.681240   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:40.746611   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:40.746631   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
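Note: the container-status gather is deliberately defensive: `which crictl || echo crictl` falls back to the bare name when crictl is not on PATH, and the trailing `|| sudo docker ps -a` only runs if the crictl listing fails, so the same one-liner works on both containerd- and docker-backed nodes. Expanded form:

    CRICTL=$(which crictl || echo crictl)     # bare name if not on PATH
    sudo "$CRICTL" ps -a || sudo docker ps -a # docker fallback if CRI listing fails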
	I1206 08:54:43.275588   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:43.286822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:43.286894   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:43.313760   54452 cri.go:89] found id: ""
	I1206 08:54:43.313779   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.313786   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:43.313793   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:43.313852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:43.338174   54452 cri.go:89] found id: ""
	I1206 08:54:43.338188   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.338203   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:43.338208   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:43.338278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:43.362249   54452 cri.go:89] found id: ""
	I1206 08:54:43.362263   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.362270   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:43.362275   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:43.362333   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:43.386332   54452 cri.go:89] found id: ""
	I1206 08:54:43.386345   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.386353   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:43.386358   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:43.386413   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:43.413265   54452 cri.go:89] found id: ""
	I1206 08:54:43.413278   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.413285   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:43.413290   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:43.413346   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:43.437411   54452 cri.go:89] found id: ""
	I1206 08:54:43.437424   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.437431   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:43.437436   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:43.437497   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:43.463006   54452 cri.go:89] found id: ""
	I1206 08:54:43.463019   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.463046   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:43.463054   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:43.463065   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:43.531909   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:43.523361   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.524077   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525611   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525984   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.527554   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:43.523361   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.524077   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525611   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525984   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.527554   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:43.531920   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:43.531930   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:43.596428   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:43.596447   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:43.625653   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:43.625669   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:43.685656   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:43.685675   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:46.197048   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:46.207403   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:46.207468   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:46.259332   54452 cri.go:89] found id: ""
	I1206 08:54:46.259345   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.259361   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:46.259367   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:46.259453   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:46.293591   54452 cri.go:89] found id: ""
	I1206 08:54:46.293604   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.293611   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:46.293616   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:46.293674   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:46.324320   54452 cri.go:89] found id: ""
	I1206 08:54:46.324333   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.324340   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:46.324345   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:46.324403   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:46.349505   54452 cri.go:89] found id: ""
	I1206 08:54:46.349519   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.349526   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:46.349531   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:46.349592   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:46.372944   54452 cri.go:89] found id: ""
	I1206 08:54:46.372958   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.372965   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:46.372970   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:46.373028   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:46.397863   54452 cri.go:89] found id: ""
	I1206 08:54:46.397876   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.397884   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:46.397889   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:46.397947   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:46.423405   54452 cri.go:89] found id: ""
	I1206 08:54:46.423419   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.423426   54452 logs.go:284] No container was found matching "kindnet"
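The enumeration above queries crictl once per control-plane component, and every query returns an empty ID list: containerd has no record of any of these pods being created. A compact equivalent of that per-component check, assuming crictl is on the node's PATH as it is in these logs, is:

    # Sketch: enumerate all expected control-plane containers in one loop (run inside the node).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done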
	I1206 08:54:46.423434   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:46.423444   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:46.479557   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:46.479577   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:46.490975   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:46.490992   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:46.555476   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:46.546289   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.547116   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.548919   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.549655   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.551369   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:46.546289   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.547116   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.548919   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.549655   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.551369   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:46.555486   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:46.555499   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:46.617650   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:46.617666   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:49.145146   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:49.156935   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:49.157011   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:49.181313   54452 cri.go:89] found id: ""
	I1206 08:54:49.181327   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.181334   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:49.181339   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:49.181396   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:49.205770   54452 cri.go:89] found id: ""
	I1206 08:54:49.205783   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.205792   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:49.205797   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:49.205854   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:49.246208   54452 cri.go:89] found id: ""
	I1206 08:54:49.246232   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.246240   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:49.246245   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:49.246312   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:49.276707   54452 cri.go:89] found id: ""
	I1206 08:54:49.276720   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.276739   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:49.276744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:49.276817   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:49.304665   54452 cri.go:89] found id: ""
	I1206 08:54:49.304684   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.304691   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:49.304696   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:49.304754   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:49.329874   54452 cri.go:89] found id: ""
	I1206 08:54:49.329888   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.329895   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:49.329901   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:49.329967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:49.355459   54452 cri.go:89] found id: ""
	I1206 08:54:49.355473   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.355480   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:49.355487   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:49.355503   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:49.383334   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:49.383349   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:49.438134   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:49.438151   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:49.449298   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:49.449313   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:49.517360   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:49.507622   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.508394   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510126   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510650   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.512155   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:49.507622   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.508394   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510126   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510650   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.512155   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:49.517370   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:49.517380   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:52.080828   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:52.091103   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:52.091181   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:52.116535   54452 cri.go:89] found id: ""
	I1206 08:54:52.116549   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.116556   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:52.116570   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:52.116633   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:52.142398   54452 cri.go:89] found id: ""
	I1206 08:54:52.142412   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.142424   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:52.142429   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:52.142485   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:52.169937   54452 cri.go:89] found id: ""
	I1206 08:54:52.169951   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.169958   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:52.169963   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:52.170020   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:52.200818   54452 cri.go:89] found id: ""
	I1206 08:54:52.200832   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.200838   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:52.200843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:52.200899   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:52.228819   54452 cri.go:89] found id: ""
	I1206 08:54:52.228833   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.228841   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:52.228846   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:52.228908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:52.258951   54452 cri.go:89] found id: ""
	I1206 08:54:52.258964   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.258972   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:52.258977   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:52.259042   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:52.294986   54452 cri.go:89] found id: ""
	I1206 08:54:52.295000   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.295007   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:52.295015   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:52.295025   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:52.362225   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:52.362245   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:52.389713   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:52.389729   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:52.445119   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:52.445137   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:52.458958   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:52.458980   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:52.523486   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:52.514851   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.515698   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517261   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517893   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.519458   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:52.514851   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.515698   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517261   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517893   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.519458   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
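Every describe-nodes attempt fails identically: kubectl cannot reach https://localhost:8441, and the failure is a TCP connection refused rather than a TLS or authentication error, so nothing is listening on the apiserver port at all. A quick manual probe from inside the node, sketched under the assumption that ss and curl are available there:

    # Sketch: confirm nothing is bound to the apiserver port inside the node.
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    curl -k --max-time 5 https://localhost:8441/healthz || true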
	I1206 08:54:55.023766   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:55.034751   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:55.034820   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:55.060938   54452 cri.go:89] found id: ""
	I1206 08:54:55.060952   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.060960   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:55.060965   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:55.061025   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:55.086352   54452 cri.go:89] found id: ""
	I1206 08:54:55.086365   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.086383   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:55.086389   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:55.086457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:55.111318   54452 cri.go:89] found id: ""
	I1206 08:54:55.111334   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.111341   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:55.111346   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:55.111427   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:55.140103   54452 cri.go:89] found id: ""
	I1206 08:54:55.140118   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.140125   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:55.140130   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:55.140194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:55.164478   54452 cri.go:89] found id: ""
	I1206 08:54:55.164492   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.164500   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:55.164505   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:55.164565   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:55.191182   54452 cri.go:89] found id: ""
	I1206 08:54:55.191195   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.191203   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:55.191209   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:55.191266   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:55.216083   54452 cri.go:89] found id: ""
	I1206 08:54:55.216097   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.216104   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:55.216111   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:55.216122   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:55.303982   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:55.294944   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.295756   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.297492   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.298117   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.299945   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:55.294944   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.295756   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.297492   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.298117   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.299945   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:55.303992   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:55.304003   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:55.365857   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:55.365875   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:55.393911   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:55.393928   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:55.455110   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:55.455129   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:57.967188   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:57.977408   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:57.977467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:58.003574   54452 cri.go:89] found id: ""
	I1206 08:54:58.003588   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.003596   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:58.003601   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:58.003662   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:58.029323   54452 cri.go:89] found id: ""
	I1206 08:54:58.029337   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.029344   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:58.029348   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:58.029408   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:58.054996   54452 cri.go:89] found id: ""
	I1206 08:54:58.055010   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.055018   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:58.055023   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:58.055087   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:58.079698   54452 cri.go:89] found id: ""
	I1206 08:54:58.079711   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.079718   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:58.079723   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:58.079785   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:58.106383   54452 cri.go:89] found id: ""
	I1206 08:54:58.106396   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.106403   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:58.106408   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:58.106467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:58.135301   54452 cri.go:89] found id: ""
	I1206 08:54:58.135315   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.135325   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:58.135330   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:58.135431   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:58.165240   54452 cri.go:89] found id: ""
	I1206 08:54:58.165255   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.165262   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:58.165269   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:58.165279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:58.176468   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:58.176483   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:58.263783   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:58.246836   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.247297   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.255628   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.256461   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.259475   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:58.246836   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.247297   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.255628   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.256461   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.259475   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:58.263793   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:58.263806   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:58.336059   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:58.336078   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:58.364550   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:58.364565   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
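The pgrep probe (sudo pgrep -xnf kube-apiserver.*minikube.*) repeats roughly every three seconds in these logs, and each miss triggers another full log-gathering pass. The loop below is a rough sketch of that wait-for-apiserver behavior, not minikube's actual implementation:

    # Sketch: poll for a kube-apiserver process, gathering a diagnostic on each miss.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sudo crictl ps -a --quiet --name=kube-apiserver
      sleep 3
    done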
	I1206 08:55:00.926395   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:00.936607   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:00.936669   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:00.961767   54452 cri.go:89] found id: ""
	I1206 08:55:00.961781   54452 logs.go:282] 0 containers: []
	W1206 08:55:00.961788   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:00.961793   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:00.961855   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:00.987655   54452 cri.go:89] found id: ""
	I1206 08:55:00.987671   54452 logs.go:282] 0 containers: []
	W1206 08:55:00.987678   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:00.987684   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:00.987753   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:01.017321   54452 cri.go:89] found id: ""
	I1206 08:55:01.017335   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.017342   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:01.017347   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:01.017405   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:01.043120   54452 cri.go:89] found id: ""
	I1206 08:55:01.043134   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.043140   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:01.043146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:01.043208   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:01.069934   54452 cri.go:89] found id: ""
	I1206 08:55:01.069951   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.069958   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:01.069967   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:01.070037   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:01.095743   54452 cri.go:89] found id: ""
	I1206 08:55:01.095757   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.095765   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:01.095772   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:01.095832   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:01.120915   54452 cri.go:89] found id: ""
	I1206 08:55:01.120933   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.120940   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:01.120948   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:01.120958   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:01.179366   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:01.179392   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:01.191802   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:01.191818   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:01.292667   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:01.282943   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284116   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284837   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.286639   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.287228   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:01.282943   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284116   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284837   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.286639   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.287228   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:01.292676   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:01.292687   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:01.357710   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:01.357729   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:03.889702   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:03.900135   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:03.900194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:03.926097   54452 cri.go:89] found id: ""
	I1206 08:55:03.926122   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.926129   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:03.926135   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:03.926204   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:03.950796   54452 cri.go:89] found id: ""
	I1206 08:55:03.950810   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.950818   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:03.950823   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:03.950881   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:03.976998   54452 cri.go:89] found id: ""
	I1206 08:55:03.977012   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.977018   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:03.977024   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:03.977083   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:04.004847   54452 cri.go:89] found id: ""
	I1206 08:55:04.004862   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.004870   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:04.004876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:04.004943   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:04.030715   54452 cri.go:89] found id: ""
	I1206 08:55:04.030729   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.030737   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:04.030742   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:04.030806   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:04.056324   54452 cri.go:89] found id: ""
	I1206 08:55:04.056338   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.056345   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:04.056351   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:04.056412   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:04.082124   54452 cri.go:89] found id: ""
	I1206 08:55:04.082137   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.082145   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:04.082152   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:04.082163   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:04.138719   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:04.138737   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:04.150252   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:04.150269   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:04.220848   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:04.209917   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.210563   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212138   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212692   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.214187   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:04.209917   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.210563   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212138   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212692   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.214187   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:04.220858   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:04.220868   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:04.293646   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:04.293665   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
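When a start hangs in this polling state, the same bundle these passes assemble can be exported in one step for offline inspection; a minimal sketch, again with the profile name as a placeholder:

    # Sketch: export the combined minikube logs to a file.
    minikube -p <profile> logs --file=./minikube-logs.txt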
	I1206 08:55:06.823180   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:06.833518   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:06.833576   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:06.863092   54452 cri.go:89] found id: ""
	I1206 08:55:06.863106   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.863113   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:06.863119   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:06.863177   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:06.888504   54452 cri.go:89] found id: ""
	I1206 08:55:06.888519   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.888525   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:06.888530   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:06.888595   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:06.918175   54452 cri.go:89] found id: ""
	I1206 08:55:06.918189   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.918197   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:06.918202   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:06.918261   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:06.944460   54452 cri.go:89] found id: ""
	I1206 08:55:06.944473   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.944480   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:06.944485   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:06.944551   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:06.973765   54452 cri.go:89] found id: ""
	I1206 08:55:06.973778   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.973786   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:06.973791   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:06.973852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:06.999311   54452 cri.go:89] found id: ""
	I1206 08:55:06.999324   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.999331   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:06.999337   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:06.999415   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:07.027677   54452 cri.go:89] found id: ""
	I1206 08:55:07.027690   54452 logs.go:282] 0 containers: []
	W1206 08:55:07.027697   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:07.027705   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:07.027715   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:07.086320   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:07.086338   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:07.097607   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:07.097623   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:07.161897   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:07.153185   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.154007   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.155730   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.156339   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.158053   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:07.153185   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.154007   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.155730   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.156339   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.158053   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:07.161907   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:07.161919   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:07.224772   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:07.224792   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:09.768328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:09.778939   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:09.779000   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:09.805473   54452 cri.go:89] found id: ""
	I1206 08:55:09.805487   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.805494   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:09.805499   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:09.805557   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:09.830605   54452 cri.go:89] found id: ""
	I1206 08:55:09.830618   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.830625   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:09.830630   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:09.830689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:09.855855   54452 cri.go:89] found id: ""
	I1206 08:55:09.855869   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.855876   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:09.855881   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:09.855937   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:09.880900   54452 cri.go:89] found id: ""
	I1206 08:55:09.880913   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.880920   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:09.880925   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:09.880981   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:09.906796   54452 cri.go:89] found id: ""
	I1206 08:55:09.906810   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.906817   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:09.906822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:09.906882   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:09.932980   54452 cri.go:89] found id: ""
	I1206 08:55:09.932996   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.933004   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:09.933009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:09.933081   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:09.961870   54452 cri.go:89] found id: ""
	I1206 08:55:09.961884   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.961892   54452 logs.go:284] No container was found matching "kindnet"
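
Each pass above asks containerd, through crictl, whether a container exists for each expected control-plane component, and every lookup comes back empty. A condensed, hypothetical equivalent of that per-component enumeration (component list and crictl flags copied from the Run: lines above; run inside the node):

    # Empty crictl output is what produces the 'No container was found
    # matching' warnings above (logs.go:284).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] && echo "$name -> $ids" || echo "no container matching \"$name\""
    done
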
	I1206 08:55:09.961900   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:09.961922   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:10.018106   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:10.018129   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:10.031414   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:10.031441   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:10.103678   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:10.092903   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.093952   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.095756   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.096440   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.098120   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:10.092903   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.093952   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.095756   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.096440   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.098120   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:10.103689   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:10.103700   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:10.167044   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:10.167063   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
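
The timestamps show this whole cycle — pgrep for a running apiserver, the per-component crictl lookups, then log gathering — re-running on a roughly three-second cadence. A hypothetical standalone wait loop with the same shape (the pgrep pattern comes from the log; the 90 s deadline is an illustrative choice, not minikube's actual timeout):

    # Same process check minikube runs at the top of each cycle, retried
    # every 3 s until a deadline.
    deadline=$((SECONDS + 90))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo 'kube-apiserver never appeared on this node' >&2
        exit 1
      fi
      sleep 3
    done
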
	[the same diagnostic cycle repeats at 08:55:12, 08:55:15, 08:55:18, 08:55:21, 08:55:24, 08:55:27, and 08:55:30: each pgrep/crictl pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet container, and each "describe nodes" attempt fails with the same connection-refused errors against https://localhost:8441]
	I1206 08:55:33.222243   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:33.233203   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:33.233264   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:33.259086   54452 cri.go:89] found id: ""
	I1206 08:55:33.259099   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.259107   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:33.259113   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:33.259175   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:33.285885   54452 cri.go:89] found id: ""
	I1206 08:55:33.285912   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.285920   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:33.285926   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:33.286002   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:33.313522   54452 cri.go:89] found id: ""
	I1206 08:55:33.313536   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.313543   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:33.313554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:33.313614   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:33.343303   54452 cri.go:89] found id: ""
	I1206 08:55:33.343318   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.343335   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:33.343341   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:33.343434   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:33.372461   54452 cri.go:89] found id: ""
	I1206 08:55:33.372475   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.372482   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:33.372488   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:33.372556   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:33.398660   54452 cri.go:89] found id: ""
	I1206 08:55:33.398674   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.398682   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:33.398695   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:33.398770   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:33.425653   54452 cri.go:89] found id: ""
	I1206 08:55:33.425667   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.425675   54452 logs.go:284] No container was found matching "kindnet"
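All seven cri.go checks above come back empty (found id: ""), meaning containerd has never created a single control-plane or CNI container. The same sweep can be reproduced with a short loop (component names copied from the log):

    # Probe for each expected control-plane container by name; an empty
    # result matches the 'found id: ""' lines in the log.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "${name}: ${ids:-<none>}"
    done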
	I1206 08:55:33.425683   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:33.425693   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:33.436575   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:33.436591   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:33.519919   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:33.509857   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.511357   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.512054   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.513835   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.514450   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:33.509857   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.511357   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.512054   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.513835   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.514450   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
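The failure mode is consistent across every probe: kubectl dials [::1]:8441 and nothing is listening, which matches crictl finding no kube-apiserver container at all. Two quick checks distinguish "process never started" from "process up but unreachable" (a hedged sketch; ss is assumed to be available in the node image):

    # Both checks should come back empty in this state: no listener on the
    # apiserver port, and no kube-apiserver process at all.
    sudo ss -ltnp 'sport = :8441'
    sudo pgrep -af kube-apiserver || echo "no kube-apiserver process"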
	I1206 08:55:33.519928   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:33.519939   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:33.584991   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:33.585010   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:33.617158   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:33.617175   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
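Each cycle ends by assembling the same four-part log bundle over SSH. The commands are runnable verbatim inside the node; the backtick fallback in the container-status step keeps it working whether crictl is on the PATH or only docker is installed:

    # The log bundle minikube gathers each cycle, runnable by hand:
    sudo journalctl -u kubelet -n 400         # kubelet unit tail
    sudo journalctl -u containerd -n 400      # container runtime tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a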
	I1206 08:55:36.180867   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:36.191295   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:36.191369   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:36.215504   54452 cri.go:89] found id: ""
	I1206 08:55:36.215518   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.215525   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:36.215530   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:36.215586   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:36.241860   54452 cri.go:89] found id: ""
	I1206 08:55:36.241874   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.241881   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:36.241886   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:36.241948   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:36.270206   54452 cri.go:89] found id: ""
	I1206 08:55:36.270220   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.270227   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:36.270232   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:36.270292   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:36.297638   54452 cri.go:89] found id: ""
	I1206 08:55:36.297651   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.297658   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:36.297663   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:36.297721   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:36.327655   54452 cri.go:89] found id: ""
	I1206 08:55:36.327681   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.327689   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:36.327694   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:36.327764   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:36.353797   54452 cri.go:89] found id: ""
	I1206 08:55:36.353811   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.353818   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:36.353825   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:36.353884   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:36.378781   54452 cri.go:89] found id: ""
	I1206 08:55:36.378795   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.378802   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:36.378810   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:36.378823   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:36.435517   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:36.435537   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:36.446663   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:36.446679   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:36.538183   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:36.527758   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.528583   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.530703   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.531276   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.534098   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:36.527758   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.528583   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.530703   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.531276   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.534098   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:36.538193   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:36.538203   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:36.601364   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:36.601383   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:39.129686   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:39.140306   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:39.140375   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:39.169862   54452 cri.go:89] found id: ""
	I1206 08:55:39.169876   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.169883   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:39.169889   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:39.169952   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:39.195755   54452 cri.go:89] found id: ""
	I1206 08:55:39.195771   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.195778   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:39.195784   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:39.195842   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:39.220719   54452 cri.go:89] found id: ""
	I1206 08:55:39.220732   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.220739   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:39.220744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:39.220801   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:39.249535   54452 cri.go:89] found id: ""
	I1206 08:55:39.249549   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.249556   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:39.249561   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:39.249620   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:39.281267   54452 cri.go:89] found id: ""
	I1206 08:55:39.281281   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.281288   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:39.281293   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:39.281379   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:39.306847   54452 cri.go:89] found id: ""
	I1206 08:55:39.306860   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.306867   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:39.306873   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:39.306933   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:39.334023   54452 cri.go:89] found id: ""
	I1206 08:55:39.334036   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.334057   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:39.334064   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:39.334073   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:39.363589   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:39.363604   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:39.420152   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:39.420169   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:39.430815   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:39.430830   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:39.513246   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:39.503975   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.504808   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.506588   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.507212   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.508933   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:39.503975   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.504808   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.506588   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.507212   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.508933   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:39.513256   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:39.513266   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:42.085786   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:42.098317   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:42.098387   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:42.134671   54452 cri.go:89] found id: ""
	I1206 08:55:42.134686   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.134695   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:42.134705   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:42.134775   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:42.167474   54452 cri.go:89] found id: ""
	I1206 08:55:42.167489   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.167498   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:42.167505   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:42.167575   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:42.202078   54452 cri.go:89] found id: ""
	I1206 08:55:42.202093   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.202100   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:42.202106   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:42.202171   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:42.228525   54452 cri.go:89] found id: ""
	I1206 08:55:42.228539   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.228546   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:42.228552   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:42.228621   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:42.257322   54452 cri.go:89] found id: ""
	I1206 08:55:42.257337   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.257344   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:42.257350   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:42.257457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:42.284221   54452 cri.go:89] found id: ""
	I1206 08:55:42.284235   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.284253   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:42.284259   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:42.284329   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:42.311654   54452 cri.go:89] found id: ""
	I1206 08:55:42.311668   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.311675   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:42.311683   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:42.311694   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:42.368273   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:42.368294   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:42.379477   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:42.379493   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:42.443515   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:42.434726   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.435504   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437161   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437774   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.439236   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:42.434726   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.435504   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437161   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437774   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.439236   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:42.443526   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:42.443543   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:42.512858   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:42.512878   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:45.043040   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:45.068009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:45.068076   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:45.109801   54452 cri.go:89] found id: ""
	I1206 08:55:45.109815   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.109823   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:45.109829   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:45.109896   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:45.149825   54452 cri.go:89] found id: ""
	I1206 08:55:45.149841   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.149849   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:45.149855   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:45.149929   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:45.187416   54452 cri.go:89] found id: ""
	I1206 08:55:45.187433   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.187441   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:45.187446   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:45.187520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:45.235891   54452 cri.go:89] found id: ""
	I1206 08:55:45.235908   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.235916   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:45.235922   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:45.236066   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:45.279650   54452 cri.go:89] found id: ""
	I1206 08:55:45.279665   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.279673   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:45.279681   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:45.279750   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:45.325794   54452 cri.go:89] found id: ""
	I1206 08:55:45.325844   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.325871   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:45.325893   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:45.325962   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:45.357237   54452 cri.go:89] found id: ""
	I1206 08:55:45.357251   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.357258   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:45.357266   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:45.357291   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:45.385704   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:45.385720   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:45.442819   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:45.442837   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:45.454504   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:45.454523   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:45.547110   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:45.538939   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.539311   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.540633   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.541395   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.542989   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:45.538939   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.539311   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.540633   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.541395   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.542989   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:45.547119   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:45.547133   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:48.116344   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:48.126956   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:48.127022   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:48.152657   54452 cri.go:89] found id: ""
	I1206 08:55:48.152671   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.152678   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:48.152684   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:48.152743   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:48.182395   54452 cri.go:89] found id: ""
	I1206 08:55:48.182409   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.182417   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:48.182422   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:48.182494   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:48.211297   54452 cri.go:89] found id: ""
	I1206 08:55:48.211310   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.211327   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:48.211333   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:48.211402   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:48.236544   54452 cri.go:89] found id: ""
	I1206 08:55:48.236558   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.236565   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:48.236571   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:48.236627   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:48.262553   54452 cri.go:89] found id: ""
	I1206 08:55:48.262570   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.262582   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:48.262587   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:48.262680   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:48.295466   54452 cri.go:89] found id: ""
	I1206 08:55:48.295488   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.295495   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:48.295506   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:48.295586   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:48.321818   54452 cri.go:89] found id: ""
	I1206 08:55:48.321830   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.321837   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:48.321845   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:48.321856   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:48.378211   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:48.378229   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:48.389232   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:48.389255   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:48.456700   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:48.448592   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.449583   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.450577   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.451171   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.452831   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:48.448592   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.449583   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.450577   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.451171   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.452831   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:48.456711   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:48.456720   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:48.523317   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:48.523335   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:51.052796   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:51.063850   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:51.063912   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:51.089613   54452 cri.go:89] found id: ""
	I1206 08:55:51.089628   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.089635   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:51.089643   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:51.089727   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:51.116588   54452 cri.go:89] found id: ""
	I1206 08:55:51.116601   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.116609   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:51.116614   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:51.116679   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:51.146172   54452 cri.go:89] found id: ""
	I1206 08:55:51.146186   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.146193   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:51.146199   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:51.146266   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:51.172046   54452 cri.go:89] found id: ""
	I1206 08:55:51.172071   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.172078   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:51.172084   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:51.172163   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:51.200464   54452 cri.go:89] found id: ""
	I1206 08:55:51.200477   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.200495   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:51.200501   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:51.200561   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:51.229170   54452 cri.go:89] found id: ""
	I1206 08:55:51.229184   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.229191   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:51.229196   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:51.229254   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:51.254375   54452 cri.go:89] found id: ""
	I1206 08:55:51.254389   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.254396   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:51.254403   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:51.254413   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:51.317370   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:51.317390   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:51.344624   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:51.344642   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:51.402739   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:51.402759   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:51.413613   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:51.413629   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:51.483207   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:51.470850   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.471424   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.472991   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.473416   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.475113   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:51.470850   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.471424   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.472991   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.473416   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.475113   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:53.983859   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:53.997260   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:53.997326   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:54.024774   54452 cri.go:89] found id: ""
	I1206 08:55:54.024788   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.024795   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:54.024801   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:54.024866   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:54.050802   54452 cri.go:89] found id: ""
	I1206 08:55:54.050830   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.050837   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:54.050842   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:54.050911   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:54.079419   54452 cri.go:89] found id: ""
	I1206 08:55:54.079433   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.079440   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:54.079446   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:54.079517   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:54.104851   54452 cri.go:89] found id: ""
	I1206 08:55:54.104864   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.104871   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:54.104876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:54.104933   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:54.133815   54452 cri.go:89] found id: ""
	I1206 08:55:54.133829   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.133847   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:54.133853   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:54.133909   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:54.163047   54452 cri.go:89] found id: ""
	I1206 08:55:54.163071   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.163078   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:54.163083   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:54.163150   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:54.190227   54452 cri.go:89] found id: ""
	I1206 08:55:54.190242   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.190249   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:54.190263   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:54.190273   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:54.246189   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:54.246208   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:54.257068   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:54.257083   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:54.322094   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:54.313214   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.313895   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.315763   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.316388   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.318125   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:54.313214   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.313895   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.315763   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.316388   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.318125   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:54.322104   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:54.322114   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:54.385131   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:54.385150   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:56.917265   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:56.927438   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:56.927499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:56.951596   54452 cri.go:89] found id: ""
	I1206 08:55:56.951611   54452 logs.go:282] 0 containers: []
	W1206 08:55:56.951618   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:56.951623   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:56.951685   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:56.975635   54452 cri.go:89] found id: ""
	I1206 08:55:56.975649   54452 logs.go:282] 0 containers: []
	W1206 08:55:56.975656   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:56.975661   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:56.975718   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:57.005275   54452 cri.go:89] found id: ""
	I1206 08:55:57.005289   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.005296   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:57.005302   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:57.005370   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:57.031301   54452 cri.go:89] found id: ""
	I1206 08:55:57.031315   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.031333   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:57.031339   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:57.031422   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:57.057133   54452 cri.go:89] found id: ""
	I1206 08:55:57.057146   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.057153   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:57.057159   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:57.057221   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:57.081358   54452 cri.go:89] found id: ""
	I1206 08:55:57.081371   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.081378   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:57.081384   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:57.081442   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:57.116018   54452 cri.go:89] found id: ""
	I1206 08:55:57.116033   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.116049   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:57.116057   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:57.116067   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:57.171598   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:57.171615   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
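The dmesg invocation filters the kernel ring buffer to warning-and-worse messages and tails the last 400 lines. A minimal equivalent without the formatting flags:

	# Keep only warn/err/crit/alert/emerg kernel messages, last 400 lines
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400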
	I1206 08:55:57.182153   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:57.182169   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:57.245457   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 08:55:57.237416   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.237828   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.239402   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.240057   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.241674   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
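Between each log-gathering pass, the runner re-checks for a live apiserver process with pgrep before looping again. A standalone version of that readiness probe, with the flags spelled out:

	# -f matches against the full command line, -x requires the whole
	# line to match the pattern exactly, -n picks the newest match.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	  && echo "apiserver process found" \
	  || echo "no apiserver process yet"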
	I1206 08:55:57.245466   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:57.245476   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:57.307969   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:57.307987   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:59.836840   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:59.846983   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:59.847044   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:59.871818   54452 cri.go:89] found id: ""
	I1206 08:55:59.871831   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.871838   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:59.871844   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:59.871904   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:59.896695   54452 cri.go:89] found id: ""
	I1206 08:55:59.896709   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.896716   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:59.896721   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:59.896787   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:59.921887   54452 cri.go:89] found id: ""
	I1206 08:55:59.921911   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.921918   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:59.921924   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:59.921998   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:59.948824   54452 cri.go:89] found id: ""
	I1206 08:55:59.948837   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.948845   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:59.948850   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:59.948908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:59.974553   54452 cri.go:89] found id: ""
	I1206 08:55:59.974567   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.974575   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:59.974580   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:59.974638   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:00.057731   54452 cri.go:89] found id: ""
	I1206 08:56:00.057783   54452 logs.go:282] 0 containers: []
	W1206 08:56:00.057791   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:00.057798   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:00.058035   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:00.191639   54452 cri.go:89] found id: ""
	I1206 08:56:00.191655   54452 logs.go:282] 0 containers: []
	W1206 08:56:00.191663   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:00.191671   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:00.191685   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:00.488607   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 08:56:00.462504   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.463297   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477164   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477991   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.479845   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:00.488619   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:00.488632   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:00.602413   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:00.602434   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:00.637181   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:00.637200   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:00.701850   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:00.701868   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:03.215126   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:03.225397   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:03.225464   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:03.253115   54452 cri.go:89] found id: ""
	I1206 08:56:03.253128   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.253135   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:03.253143   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:03.253203   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:03.278704   54452 cri.go:89] found id: ""
	I1206 08:56:03.278717   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.278724   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:03.278730   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:03.278788   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:03.304400   54452 cri.go:89] found id: ""
	I1206 08:56:03.304414   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.304421   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:03.304427   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:03.304484   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:03.330915   54452 cri.go:89] found id: ""
	I1206 08:56:03.330927   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.330934   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:03.330939   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:03.331000   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:03.356123   54452 cri.go:89] found id: ""
	I1206 08:56:03.356136   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.356143   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:03.356149   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:03.356205   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:03.381497   54452 cri.go:89] found id: ""
	I1206 08:56:03.381511   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.381517   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:03.381523   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:03.381582   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:03.405821   54452 cri.go:89] found id: ""
	I1206 08:56:03.405834   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.405841   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:03.405849   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:03.405859   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:03.462897   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:03.462918   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:03.474378   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:03.474393   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:03.559522   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 08:56:03.549699   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.550344   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.552761   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.554016   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.555406   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:03.559532   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:03.559545   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:03.626698   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:03.626716   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:06.154123   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:06.164837   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:06.164908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:06.191102   54452 cri.go:89] found id: ""
	I1206 08:56:06.191115   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.191123   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:06.191128   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:06.191194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:06.215815   54452 cri.go:89] found id: ""
	I1206 08:56:06.215829   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.215836   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:06.215841   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:06.215901   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:06.241431   54452 cri.go:89] found id: ""
	I1206 08:56:06.241445   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.241452   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:06.241457   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:06.241520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:06.266677   54452 cri.go:89] found id: ""
	I1206 08:56:06.266692   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.266699   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:06.266705   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:06.266768   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:06.290924   54452 cri.go:89] found id: ""
	I1206 08:56:06.290940   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.290948   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:06.290953   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:06.291015   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:06.315767   54452 cri.go:89] found id: ""
	I1206 08:56:06.315781   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.315788   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:06.315794   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:06.315852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:06.341271   54452 cri.go:89] found id: ""
	I1206 08:56:06.341284   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.341291   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:06.341298   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:06.341309   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:06.369777   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:06.369793   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:06.426976   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:06.426995   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:06.438111   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:06.438126   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:06.515349   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 08:56:06.504075   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.504993   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.506819   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.507499   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.510593   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:06.515366   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:06.515403   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
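The same journalctl pattern serves both the kubelet and containerd units. When reproducing it interactively, disabling the pager and switching to ISO timestamps (standard journalctl options, not part of the runner's invocation) makes the tail easier to line up with the test log:

	# Last 400 containerd log lines, no pager, ISO-8601 timestamps
	sudo journalctl -u containerd -n 400 --no-pager -o short-iso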
	I1206 08:56:09.084957   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:09.095918   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:09.095982   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:09.122788   54452 cri.go:89] found id: ""
	I1206 08:56:09.122802   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.122816   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:09.122822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:09.122886   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:09.150280   54452 cri.go:89] found id: ""
	I1206 08:56:09.150296   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.150303   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:09.150308   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:09.150370   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:09.175968   54452 cri.go:89] found id: ""
	I1206 08:56:09.175982   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.175989   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:09.175995   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:09.176054   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:09.205200   54452 cri.go:89] found id: ""
	I1206 08:56:09.205214   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.205221   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:09.205226   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:09.205284   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:09.229722   54452 cri.go:89] found id: ""
	I1206 08:56:09.229741   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.229758   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:09.229764   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:09.229823   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:09.253449   54452 cri.go:89] found id: ""
	I1206 08:56:09.253462   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.253469   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:09.253475   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:09.253532   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:09.278075   54452 cri.go:89] found id: ""
	I1206 08:56:09.278096   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.278103   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:09.278111   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:09.278127   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:09.334207   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:09.334224   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:09.345268   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:09.345284   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:09.411030   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 08:56:09.402872   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.403332   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.404994   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.405444   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.406900   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:09.411046   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:09.411057   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:09.477250   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:09.477268   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:12.012172   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:12.023603   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:12.023666   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:12.049522   54452 cri.go:89] found id: ""
	I1206 08:56:12.049536   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.049544   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:12.049549   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:12.049616   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:12.079322   54452 cri.go:89] found id: ""
	I1206 08:56:12.079336   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.079343   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:12.079348   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:12.079434   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:12.104615   54452 cri.go:89] found id: ""
	I1206 08:56:12.104629   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.104636   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:12.104642   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:12.104698   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:12.129522   54452 cri.go:89] found id: ""
	I1206 08:56:12.129536   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.129542   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:12.129548   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:12.129603   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:12.154617   54452 cri.go:89] found id: ""
	I1206 08:56:12.154631   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.154637   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:12.154642   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:12.154701   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:12.180772   54452 cri.go:89] found id: ""
	I1206 08:56:12.180786   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.180793   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:12.180798   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:12.180860   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:12.204559   54452 cri.go:89] found id: ""
	I1206 08:56:12.204573   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.204585   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:12.204593   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:12.204605   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:12.267761   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:12.267780   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:12.295680   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:12.295696   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:12.355740   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:12.355759   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:12.367574   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:12.367589   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:12.438034   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 08:56:12.429169   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.429845   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.431592   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.432279   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.433870   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:14.938326   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:14.948550   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:14.948610   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:14.974812   54452 cri.go:89] found id: ""
	I1206 08:56:14.974825   54452 logs.go:282] 0 containers: []
	W1206 08:56:14.974832   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:14.974843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:14.974901   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:15.033969   54452 cri.go:89] found id: ""
	I1206 08:56:15.033985   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.034002   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:15.034009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:15.034081   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:15.061932   54452 cri.go:89] found id: ""
	I1206 08:56:15.061946   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.061954   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:15.061959   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:15.062054   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:15.092717   54452 cri.go:89] found id: ""
	I1206 08:56:15.092731   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.092738   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:15.092744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:15.092804   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:15.119219   54452 cri.go:89] found id: ""
	I1206 08:56:15.119234   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.119242   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:15.119247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:15.119309   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:15.149464   54452 cri.go:89] found id: ""
	I1206 08:56:15.149477   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.149485   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:15.149490   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:15.149550   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:15.175614   54452 cri.go:89] found id: ""
	I1206 08:56:15.175628   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.175635   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:15.175643   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:15.175653   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:15.239770   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:15.239789   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:15.267874   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:15.267891   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:15.327229   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:15.327247   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:15.338540   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:15.338557   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:15.402152   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 08:56:15.393377   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.393759   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395003   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395462   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.397184   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:17.903812   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:17.914165   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:17.914229   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:17.942343   54452 cri.go:89] found id: ""
	I1206 08:56:17.942357   54452 logs.go:282] 0 containers: []
	W1206 08:56:17.942363   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:17.942369   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:17.942427   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:17.972379   54452 cri.go:89] found id: ""
	I1206 08:56:17.972394   54452 logs.go:282] 0 containers: []
	W1206 08:56:17.972401   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:17.972406   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:17.972474   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:18.000726   54452 cri.go:89] found id: ""
	I1206 08:56:18.000740   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.000762   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:18.000768   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:18.000832   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:18.027348   54452 cri.go:89] found id: ""
	I1206 08:56:18.027406   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.027418   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:18.027431   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:18.027515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:18.055911   54452 cri.go:89] found id: ""
	I1206 08:56:18.055925   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.055933   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:18.055937   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:18.055994   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:18.085367   54452 cri.go:89] found id: ""
	I1206 08:56:18.085381   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.085392   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:18.085398   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:18.085466   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:18.110486   54452 cri.go:89] found id: ""
	I1206 08:56:18.110505   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.110513   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:18.110520   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:18.110531   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:18.174849   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 08:56:18.166389   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.166788   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168371   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168921   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.170369   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:18.174859   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:18.174870   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:18.237754   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:18.237774   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:18.268012   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:18.268033   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:18.324652   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:18.324671   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:20.837649   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:20.848772   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:20.848844   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:20.875162   54452 cri.go:89] found id: ""
	I1206 08:56:20.875177   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.875184   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:20.875190   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:20.875260   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:20.900599   54452 cri.go:89] found id: ""
	I1206 08:56:20.900613   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.900620   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:20.900625   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:20.900683   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:20.928195   54452 cri.go:89] found id: ""
	I1206 08:56:20.928209   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.928216   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:20.928221   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:20.928288   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:20.952510   54452 cri.go:89] found id: ""
	I1206 08:56:20.952524   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.952532   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:20.952537   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:20.952594   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:20.976651   54452 cri.go:89] found id: ""
	I1206 08:56:20.976665   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.976672   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:20.976677   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:20.976747   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:21.003279   54452 cri.go:89] found id: ""
	I1206 08:56:21.003294   54452 logs.go:282] 0 containers: []
	W1206 08:56:21.003301   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:21.003306   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:21.003372   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:21.029382   54452 cri.go:89] found id: ""
	I1206 08:56:21.029396   54452 logs.go:282] 0 containers: []
	W1206 08:56:21.029403   54452 logs.go:284] No container was found matching "kindnet"
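
The component scan above issues one crictl call per control-plane name; an empty ID list is what produces each found id: "" line and the matching "No container was found" warning. The same scan written as a loop (component names taken from the trace; a sketch, not how minikube implements it):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -n "$ids" ] || echo "No container was found matching \"$c\""
    done
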
	I1206 08:56:21.029411   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:21.029421   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:21.091035   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:21.082849   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.083705   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085252   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085569   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.087050   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:21.091049   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:21.091059   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:21.153084   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:21.153102   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:21.179992   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:21.180009   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:21.242302   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:21.242323   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
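
Each failed pass collects host-level logs before retrying. The exact commands appear in the Run: lines above; grouped here for reference, with the -n 400 and tail -n 400 limits keeping only the most recent 400 lines of each source:

    sudo journalctl -u kubelet -n 400       # kubelet unit log
    sudo journalctl -u containerd -n 400    # container runtime log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
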
	I1206 08:56:23.753350   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:23.764153   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:23.764212   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:23.794093   54452 cri.go:89] found id: ""
	I1206 08:56:23.794108   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.794115   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:23.794121   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:23.794192   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:23.818597   54452 cri.go:89] found id: ""
	I1206 08:56:23.818611   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.818618   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:23.818623   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:23.818681   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:23.845861   54452 cri.go:89] found id: ""
	I1206 08:56:23.845875   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.845882   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:23.845887   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:23.845951   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:23.871357   54452 cri.go:89] found id: ""
	I1206 08:56:23.871371   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.871423   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:23.871428   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:23.871486   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:23.895904   54452 cri.go:89] found id: ""
	I1206 08:56:23.895918   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.895926   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:23.895931   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:23.895998   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:23.921905   54452 cri.go:89] found id: ""
	I1206 08:56:23.921918   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.921925   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:23.921931   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:23.921988   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:23.946488   54452 cri.go:89] found id: ""
	I1206 08:56:23.946512   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.946520   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:23.946529   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:23.946539   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:24.002888   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:24.002907   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:24.015146   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:24.015170   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:24.085686   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:24.074786   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.075755   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.078321   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.079336   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.080390   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
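
The "describe nodes" step runs the kubectl binary that minikube ships inside the node against the cluster's own kubeconfig, so it fails as soon as the apiserver is unreachable. Reproducing it by hand looks roughly like this (paths copied from the trace; the profile name placeholder stands for whatever profile this test created):

    minikube -p <profile> ssh
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
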
	I1206 08:56:24.085697   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:24.085707   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:24.149216   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:24.149233   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:26.686769   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:26.697125   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:26.697183   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:26.728496   54452 cri.go:89] found id: ""
	I1206 08:56:26.728510   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.728527   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:26.728532   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:26.728597   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:26.755101   54452 cri.go:89] found id: ""
	I1206 08:56:26.755115   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.755130   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:26.755136   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:26.755195   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:26.785198   54452 cri.go:89] found id: ""
	I1206 08:56:26.785211   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.785229   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:26.785234   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:26.785298   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:26.816431   54452 cri.go:89] found id: ""
	I1206 08:56:26.816445   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.816452   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:26.816457   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:26.816515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:26.841875   54452 cri.go:89] found id: ""
	I1206 08:56:26.841889   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.841897   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:26.841902   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:26.841964   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:26.868358   54452 cri.go:89] found id: ""
	I1206 08:56:26.868372   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.868379   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:26.868384   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:26.868456   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:26.895528   54452 cri.go:89] found id: ""
	I1206 08:56:26.895541   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.895547   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:26.895555   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:26.895564   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:26.961952   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:26.961970   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:27.006459   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:27.006475   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:27.063666   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:27.063685   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:27.074993   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:27.075011   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:27.138852   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:27.130623   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.131223   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.132971   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.133326   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.134833   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
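
Every kubectl attempt dies with connection refused on [::1]:8441, meaning nothing is listening on the apiserver port inside the node; that is consistent with the empty crictl results above, not with a TLS or kubeconfig problem. A quick confirmation from inside the node (a hypothetical check, not part of the trace):

    sudo ss -ltn | grep 8441 || echo "nothing listening on 8441"
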
	I1206 08:56:29.639504   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:29.649774   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:29.649848   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:29.679629   54452 cri.go:89] found id: ""
	I1206 08:56:29.679642   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.679650   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:29.679655   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:29.679716   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:29.704535   54452 cri.go:89] found id: ""
	I1206 08:56:29.704550   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.704557   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:29.704563   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:29.704635   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:29.737627   54452 cri.go:89] found id: ""
	I1206 08:56:29.737640   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.737647   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:29.737652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:29.737709   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:29.767083   54452 cri.go:89] found id: ""
	I1206 08:56:29.767097   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.767104   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:29.767109   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:29.767166   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:29.793665   54452 cri.go:89] found id: ""
	I1206 08:56:29.793685   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.793693   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:29.793698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:29.793761   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:29.822695   54452 cri.go:89] found id: ""
	I1206 08:56:29.822709   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.822717   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:29.822722   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:29.822781   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:29.848347   54452 cri.go:89] found id: ""
	I1206 08:56:29.848360   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.848380   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:29.848389   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:29.848399   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:29.911329   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:29.911349   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:29.939981   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:29.939996   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:30.001274   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:30.001296   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:30.022683   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:30.022703   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:30.138182   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:30.128285   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.129603   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.130253   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132024   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132540   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:32.638423   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:32.648554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:32.648613   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:32.672719   54452 cri.go:89] found id: ""
	I1206 08:56:32.672733   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.672741   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:32.672745   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:32.672808   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:32.697375   54452 cri.go:89] found id: ""
	I1206 08:56:32.697389   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.697396   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:32.697401   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:32.697456   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:32.730608   54452 cri.go:89] found id: ""
	I1206 08:56:32.730621   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.730628   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:32.730633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:32.730690   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:32.756886   54452 cri.go:89] found id: ""
	I1206 08:56:32.756900   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.756906   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:32.756911   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:32.756967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:32.786416   54452 cri.go:89] found id: ""
	I1206 08:56:32.786429   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.786436   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:32.786441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:32.786499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:32.817852   54452 cri.go:89] found id: ""
	I1206 08:56:32.817866   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.817873   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:32.817878   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:32.817948   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:32.847789   54452 cri.go:89] found id: ""
	I1206 08:56:32.847803   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.847810   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:32.847817   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:32.847826   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:32.913422   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:32.904590   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.905149   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907029   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907428   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.909140   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:32.913432   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:32.913443   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:32.979128   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:32.979147   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:33.009021   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:33.009038   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:33.066116   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:33.066134   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:35.577653   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:35.587677   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:35.587739   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:35.612385   54452 cri.go:89] found id: ""
	I1206 08:56:35.612398   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.612405   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:35.612416   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:35.612474   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:35.639348   54452 cri.go:89] found id: ""
	I1206 08:56:35.639362   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.639369   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:35.639395   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:35.639457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:35.662406   54452 cri.go:89] found id: ""
	I1206 08:56:35.662420   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.662427   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:35.662432   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:35.662494   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:35.686450   54452 cri.go:89] found id: ""
	I1206 08:56:35.686464   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.686471   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:35.686476   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:35.686535   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:35.715902   54452 cri.go:89] found id: ""
	I1206 08:56:35.715915   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.715922   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:35.715927   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:35.715986   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:35.753483   54452 cri.go:89] found id: ""
	I1206 08:56:35.753496   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.753503   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:35.753509   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:35.753571   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:35.787475   54452 cri.go:89] found id: ""
	I1206 08:56:35.787488   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.787495   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:35.787509   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:35.787520   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:35.799521   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:35.799536   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:35.865541   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:35.856956   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.857477   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859150   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859621   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.861412   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:35.865551   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:35.865562   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:35.928394   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:35.928412   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:35.960163   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:35.960178   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:38.518969   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:38.529441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:38.529503   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:38.556742   54452 cri.go:89] found id: ""
	I1206 08:56:38.556756   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.556764   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:38.556769   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:38.556828   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:38.585575   54452 cri.go:89] found id: ""
	I1206 08:56:38.585589   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.585596   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:38.585602   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:38.585675   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:38.610698   54452 cri.go:89] found id: ""
	I1206 08:56:38.610713   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.610721   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:38.610726   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:38.610799   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:38.635789   54452 cri.go:89] found id: ""
	I1206 08:56:38.635802   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.635809   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:38.635814   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:38.635875   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:38.664415   54452 cri.go:89] found id: ""
	I1206 08:56:38.664429   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.664436   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:38.664441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:38.664499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:38.692373   54452 cri.go:89] found id: ""
	I1206 08:56:38.692387   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.692394   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:38.692400   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:38.692463   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:38.717762   54452 cri.go:89] found id: ""
	I1206 08:56:38.717776   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.717784   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:38.717791   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:38.717804   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:38.761801   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:38.761816   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:38.823195   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:38.823214   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:38.834338   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:38.834354   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:38.902350   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:38.894283   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895054   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895859   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897382   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897703   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:38.902361   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:38.902372   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:41.468409   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:41.478754   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:41.478820   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:41.506969   54452 cri.go:89] found id: ""
	I1206 08:56:41.506982   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.506989   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:41.506997   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:41.507057   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:41.531981   54452 cri.go:89] found id: ""
	I1206 08:56:41.531995   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.532002   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:41.532007   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:41.532067   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:41.556489   54452 cri.go:89] found id: ""
	I1206 08:56:41.556503   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.556511   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:41.556516   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:41.556578   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:41.582188   54452 cri.go:89] found id: ""
	I1206 08:56:41.582202   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.582209   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:41.582224   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:41.582297   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:41.608043   54452 cri.go:89] found id: ""
	I1206 08:56:41.608065   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.608073   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:41.608078   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:41.608149   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:41.636701   54452 cri.go:89] found id: ""
	I1206 08:56:41.636714   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.636722   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:41.636728   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:41.636786   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:41.661109   54452 cri.go:89] found id: ""
	I1206 08:56:41.661123   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.661131   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:41.661138   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:41.661147   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:41.718276   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:41.718293   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:41.731689   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:41.731704   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:41.813161   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:41.804518   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.805060   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.806862   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.807552   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.809239   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:41.813171   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:41.813183   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:41.879169   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:41.879189   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:44.409328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:44.419475   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:44.419534   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:44.444626   54452 cri.go:89] found id: ""
	I1206 08:56:44.444640   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.444647   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:44.444652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:44.444709   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:44.469065   54452 cri.go:89] found id: ""
	I1206 08:56:44.469078   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.469085   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:44.469090   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:44.469154   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:44.492979   54452 cri.go:89] found id: ""
	I1206 08:56:44.492993   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.493000   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:44.493006   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:44.493065   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:44.517980   54452 cri.go:89] found id: ""
	I1206 08:56:44.517994   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.518012   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:44.518018   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:44.518084   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:44.550302   54452 cri.go:89] found id: ""
	I1206 08:56:44.550315   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.550322   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:44.550338   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:44.550411   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:44.574741   54452 cri.go:89] found id: ""
	I1206 08:56:44.574754   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.574773   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:44.574779   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:44.574844   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:44.599427   54452 cri.go:89] found id: ""
	I1206 08:56:44.599440   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.599447   54452 logs.go:284] No container was found matching "kindnet"
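Each control-plane component is probed the same way: `crictl ps -a --quiet --name=<component>` prints one container ID per line and nothing at all when no container matches, which is what produces the `found id: ""` / `0 containers: []` entries above. A sketch of that probe, assuming crictl is installed (helper name and loop are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors the crictl calls in the log: with --quiet the
    // output is just container IDs, one per line, and empty when nothing
    // matches the --name filter.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, name := range components {
    		ids, err := listContainerIDs(name)
    		if err != nil {
    			fmt.Printf("%s: probe failed: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
    	}
    }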
	I1206 08:56:44.599454   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:44.599464   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:44.655195   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:44.655213   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:44.666596   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:44.666611   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:44.743689   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:44.734701   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.735711   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737288   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737597   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.739087   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
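Every describe-nodes attempt fails the same way: kubectl cannot even open a TCP connection to the apiserver port, so the error is `connection refused` rather than a TLS or auth failure. A plain dial against the port reproduces that check without kubectl (a minimal sketch; 8441 is the --apiserver-port this profile was started with):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // "connection refused" on [::1]:8441 means no process is listening on the
    // apiserver port at all, which matches the empty kube-apiserver container
    // listings above.
    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8441")
    }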
	I1206 08:56:44.743706   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:44.743716   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:44.813114   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:44.813132   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:47.340486   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:47.350443   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:47.350502   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:47.381645   54452 cri.go:89] found id: ""
	I1206 08:56:47.381659   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.381666   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:47.381671   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:47.381732   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:47.408660   54452 cri.go:89] found id: ""
	I1206 08:56:47.408674   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.408681   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:47.408686   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:47.408751   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:47.434188   54452 cri.go:89] found id: ""
	I1206 08:56:47.434201   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.434208   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:47.434213   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:47.434272   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:47.463313   54452 cri.go:89] found id: ""
	I1206 08:56:47.463334   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.463342   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:47.463347   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:47.463437   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:47.491850   54452 cri.go:89] found id: ""
	I1206 08:56:47.491864   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.491871   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:47.491876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:47.491942   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:47.520200   54452 cri.go:89] found id: ""
	I1206 08:56:47.520214   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.520221   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:47.520226   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:47.520289   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:47.546930   54452 cri.go:89] found id: ""
	I1206 08:56:47.546943   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.546950   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:47.546958   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:47.546969   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:47.607002   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:47.607020   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:47.617961   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:47.617976   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:47.681928   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:47.673776   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.674574   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676165   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676631   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.678134   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:47.681938   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:47.681949   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:47.749465   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:47.749483   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:50.280242   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:50.291127   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:50.291189   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:50.316285   54452 cri.go:89] found id: ""
	I1206 08:56:50.316299   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.316307   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:50.316312   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:50.316378   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:50.342947   54452 cri.go:89] found id: ""
	I1206 08:56:50.342961   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.342968   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:50.342973   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:50.343034   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:50.368308   54452 cri.go:89] found id: ""
	I1206 08:56:50.368322   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.368329   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:50.368334   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:50.368392   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:50.392557   54452 cri.go:89] found id: ""
	I1206 08:56:50.392571   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.392578   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:50.392583   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:50.392643   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:50.417455   54452 cri.go:89] found id: ""
	I1206 08:56:50.417469   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.417477   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:50.417482   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:50.417547   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:50.442791   54452 cri.go:89] found id: ""
	I1206 08:56:50.442805   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.442813   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:50.442818   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:50.442887   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:50.473290   54452 cri.go:89] found id: ""
	I1206 08:56:50.473304   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.473310   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:50.473318   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:50.473329   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:50.484225   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:50.484242   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:50.551034   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:50.542777   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.543204   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.544973   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.545586   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.547123   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:50.551048   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:50.551059   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:50.614007   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:50.614025   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:50.642494   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:50.642510   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
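The gathering steps for kubelet and containerd are both `journalctl -u <unit> -n 400`, i.e. the last 400 journal lines for that systemd unit; the dmesg step is the same idea filtered to warning severity and above. A sketch of the unit-log step, assuming a systemd host (the helper name is ours):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherUnitLogs fetches the last 400 journal lines for one systemd unit,
    // matching the kubelet/containerd gathering commands in the log.
    func gatherUnitLogs(unit string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo journalctl -u %s -n 400", unit)).Output()
    	return string(out), err
    }

    func main() {
    	for _, unit := range []string{"kubelet", "containerd"} {
    		logs, err := gatherUnitLogs(unit)
    		if err != nil {
    			fmt.Printf("%s: gather failed: %v\n", unit, err)
    			continue
    		}
    		fmt.Printf("== %s journal: %d bytes ==\n", unit, len(logs))
    	}
    }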
	I1206 08:56:53.201231   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:53.211652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:53.211712   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:53.237084   54452 cri.go:89] found id: ""
	I1206 08:56:53.237098   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.237106   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:53.237117   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:53.237179   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:53.265518   54452 cri.go:89] found id: ""
	I1206 08:56:53.265533   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.265541   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:53.265547   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:53.265619   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:53.291219   54452 cri.go:89] found id: ""
	I1206 08:56:53.291233   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.291242   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:53.291247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:53.291304   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:53.316119   54452 cri.go:89] found id: ""
	I1206 08:56:53.316135   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.316143   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:53.316148   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:53.316208   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:53.345553   54452 cri.go:89] found id: ""
	I1206 08:56:53.345566   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.345574   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:53.345579   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:53.345637   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:53.374116   54452 cri.go:89] found id: ""
	I1206 08:56:53.374130   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.374138   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:53.374144   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:53.374201   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:53.401450   54452 cri.go:89] found id: ""
	I1206 08:56:53.401463   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.401470   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:53.401488   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:53.401498   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:53.464628   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:53.464645   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:53.492208   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:53.492225   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:53.548199   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:53.548216   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:53.559872   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:53.559887   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:53.624790   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:53.616289   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.617036   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.618638   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.619245   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.620839   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:56.126662   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:56.136918   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:56.136978   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:56.165346   54452 cri.go:89] found id: ""
	I1206 08:56:56.165359   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.165376   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:56.165382   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:56.165447   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:56.194525   54452 cri.go:89] found id: ""
	I1206 08:56:56.194538   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.194545   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:56.194562   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:56.194621   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:56.220295   54452 cri.go:89] found id: ""
	I1206 08:56:56.220309   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.220316   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:56.220321   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:56.220377   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:56.244567   54452 cri.go:89] found id: ""
	I1206 08:56:56.244580   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.244587   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:56.244592   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:56.244648   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:56.267992   54452 cri.go:89] found id: ""
	I1206 08:56:56.268005   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.268012   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:56.268018   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:56.268076   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:56.295817   54452 cri.go:89] found id: ""
	I1206 08:56:56.295830   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.295837   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:56.295843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:56.295904   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:56.319421   54452 cri.go:89] found id: ""
	I1206 08:56:56.319435   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.319442   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:56.319450   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:56.319460   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:56.350423   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:56.350439   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:56.407158   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:56.407176   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:56.417732   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:56.417747   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:56.488632   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:56.480052   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.480705   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.482573   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.483242   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.484311   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:56.488642   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:56.488652   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
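The timestamps show the whole probe repeating on a roughly three-second cadence: pgrep for the apiserver process, the per-component crictl listings, then another round of log gathering. A minimal sketch of that wait loop (pgrep pattern copied from the log; interval and deadline are our assumptions, not minikube's actual timeout):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiServerRunning mirrors the pgrep probe in the log: pgrep exits 0 only
    // when a process matching the pattern exists.
    func apiServerRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // illustrative deadline
    	for time.Now().Before(deadline) {
    		if apiServerRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s cadence seen above
    	}
    	fmt.Println("gave up waiting for kube-apiserver")
    }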
	I1206 08:56:59.061980   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:59.072278   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:59.072339   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:59.101215   54452 cri.go:89] found id: ""
	I1206 08:56:59.101228   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.101235   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:59.101241   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:59.101302   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:59.127327   54452 cri.go:89] found id: ""
	I1206 08:56:59.127342   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.127349   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:59.127355   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:59.127442   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:59.152367   54452 cri.go:89] found id: ""
	I1206 08:56:59.152381   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.152388   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:59.152393   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:59.152461   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:59.176595   54452 cri.go:89] found id: ""
	I1206 08:56:59.176609   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.176616   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:59.176622   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:59.176680   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:59.201640   54452 cri.go:89] found id: ""
	I1206 08:56:59.201654   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.201661   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:59.201667   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:59.201725   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:59.228000   54452 cri.go:89] found id: ""
	I1206 08:56:59.228015   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.228023   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:59.228028   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:59.228097   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:59.254668   54452 cri.go:89] found id: ""
	I1206 08:56:59.254681   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.254688   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:59.254696   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:59.254707   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:59.284894   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:59.284910   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:59.342586   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:59.342604   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:59.354343   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:59.354368   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:59.422837   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:59.414293   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.414916   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416482   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416892   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.418605   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:59.422847   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:59.422857   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:01.987724   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:02.004462   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:02.004525   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:02.037544   54452 cri.go:89] found id: ""
	I1206 08:57:02.037558   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.037565   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:02.037571   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:02.037629   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:02.064737   54452 cri.go:89] found id: ""
	I1206 08:57:02.064750   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.064759   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:02.064765   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:02.064822   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:02.090594   54452 cri.go:89] found id: ""
	I1206 08:57:02.090607   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.090615   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:02.090620   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:02.090677   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:02.118059   54452 cri.go:89] found id: ""
	I1206 08:57:02.118073   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.118080   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:02.118086   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:02.118142   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:02.147171   54452 cri.go:89] found id: ""
	I1206 08:57:02.147184   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.147191   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:02.147197   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:02.147258   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:02.178322   54452 cri.go:89] found id: ""
	I1206 08:57:02.178336   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.178343   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:02.178349   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:02.178409   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:02.206125   54452 cri.go:89] found id: ""
	I1206 08:57:02.206140   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.206148   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:02.206156   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:02.206166   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:02.268742   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:02.268760   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:02.298364   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:02.298379   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:02.360782   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:02.360799   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:02.372144   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:02.372159   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:02.440932   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:02.432342   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.433106   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435042   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435754   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.436799   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:04.941190   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:04.951545   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:04.951607   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:04.989383   54452 cri.go:89] found id: ""
	I1206 08:57:04.989398   54452 logs.go:282] 0 containers: []
	W1206 08:57:04.989406   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:04.989413   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:04.989480   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:05.024563   54452 cri.go:89] found id: ""
	I1206 08:57:05.024580   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.024588   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:05.024593   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:05.024654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:05.054247   54452 cri.go:89] found id: ""
	I1206 08:57:05.054260   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.054267   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:05.054272   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:05.054332   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:05.079563   54452 cri.go:89] found id: ""
	I1206 08:57:05.079582   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.079589   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:05.079594   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:05.079654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:05.104268   54452 cri.go:89] found id: ""
	I1206 08:57:05.104281   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.104288   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:05.104294   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:05.104354   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:05.133366   54452 cri.go:89] found id: ""
	I1206 08:57:05.133389   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.133399   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:05.133404   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:05.133473   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:05.157604   54452 cri.go:89] found id: ""
	I1206 08:57:05.157618   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.157625   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:05.157633   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:05.157644   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:05.169011   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:05.169026   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:05.232729   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:05.223674   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.224539   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226385   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226913   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.228611   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:05.232739   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:05.232750   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:05.295112   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:05.295130   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:05.323164   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:05.323180   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:07.880424   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:07.890491   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:07.890546   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:07.919674   54452 cri.go:89] found id: ""
	I1206 08:57:07.919688   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.919695   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:07.919702   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:07.919765   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:07.944058   54452 cri.go:89] found id: ""
	I1206 08:57:07.944072   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.944080   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:07.944085   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:07.944143   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:07.975197   54452 cri.go:89] found id: ""
	I1206 08:57:07.975211   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.975219   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:07.975223   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:07.975286   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:08.003528   54452 cri.go:89] found id: ""
	I1206 08:57:08.003551   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.003559   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:08.003565   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:08.003632   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:08.042231   54452 cri.go:89] found id: ""
	I1206 08:57:08.042244   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.042251   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:08.042264   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:08.042340   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:08.070769   54452 cri.go:89] found id: ""
	I1206 08:57:08.070783   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.070800   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:08.070806   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:08.070863   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:08.095705   54452 cri.go:89] found id: ""
	I1206 08:57:08.095722   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.095729   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:08.095736   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:08.095745   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:08.152794   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:08.152812   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:08.163981   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:08.164009   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:08.231637   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:08.223305   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.223828   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225446   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225934   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.227447   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:08.223305   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.223828   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225446   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225934   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.227447   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:08.231648   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:08.231659   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:08.294693   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:08.294710   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
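
Each probe cycle above follows the same pattern: pgrep for a running kube-apiserver process, then "crictl ps -a --quiet --name=<component>" for each control-plane component, with an empty ID list logged as No container was found matching. A minimal Go sketch of that probe loop, illustrative only and not minikube's implementation (it assumes crictl is installed and sudo is password-less; the component list is copied from the log above):

    // probe.go: re-run the per-component container probe seen in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // -a includes exited containers; --quiet prints only IDs.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }

In this run every component returns an empty list, which is why each cycle falls through to gathering kubelet, dmesg, and containerd logs instead.
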
	I1206 08:57:10.824685   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:10.834735   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:10.834797   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:10.861282   54452 cri.go:89] found id: ""
	I1206 08:57:10.861297   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.861304   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:10.861309   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:10.861380   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:10.889560   54452 cri.go:89] found id: ""
	I1206 08:57:10.889573   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.889580   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:10.889585   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:10.889646   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:10.918582   54452 cri.go:89] found id: ""
	I1206 08:57:10.918597   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.918605   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:10.918611   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:10.918677   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:10.945055   54452 cri.go:89] found id: ""
	I1206 08:57:10.945068   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.945075   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:10.945081   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:10.945142   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:10.971779   54452 cri.go:89] found id: ""
	I1206 08:57:10.971807   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.971814   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:10.971820   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:10.971883   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:11.007014   54452 cri.go:89] found id: ""
	I1206 08:57:11.007028   54452 logs.go:282] 0 containers: []
	W1206 08:57:11.007035   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:11.007041   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:11.007103   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:11.033387   54452 cri.go:89] found id: ""
	I1206 08:57:11.033415   54452 logs.go:282] 0 containers: []
	W1206 08:57:11.033422   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:11.033431   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:11.033441   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:11.103950   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:11.094735   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.095599   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097342   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097718   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.099415   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:11.094735   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.095599   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097342   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097718   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.099415   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:11.103962   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:11.103972   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:11.168820   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:11.168839   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:11.199653   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:11.199669   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:11.258665   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:11.258682   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
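
Every "describe nodes" attempt in this stretch fails identically: kubectl cannot reach https://localhost:8441 because nothing is listening on the apiserver port ("connect: connection refused"). That failure mode can be confirmed independently of kubectl with a plain TCP dial; a short sketch, with the port taken from this test's --apiserver-port=8441:

    // dial.go: distinguish "port closed" from other kubectl failures.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // Matches the log: connection refused means no apiserver yet.
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("listener on :8441; the failure would lie elsewhere")
    }

Connection refused (rather than a timeout or a TLS error) rules out networking and points at the apiserver container simply never starting, consistent with every crictl probe above returning no IDs.
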
	I1206 08:57:13.770048   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:13.780437   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:13.780537   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:13.804490   54452 cri.go:89] found id: ""
	I1206 08:57:13.804504   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.804511   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:13.804517   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:13.804576   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:13.828142   54452 cri.go:89] found id: ""
	I1206 08:57:13.828156   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.828163   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:13.828173   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:13.828234   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:13.852993   54452 cri.go:89] found id: ""
	I1206 08:57:13.853006   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.853013   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:13.853017   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:13.853073   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:13.876970   54452 cri.go:89] found id: ""
	I1206 08:57:13.876983   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.876990   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:13.876996   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:13.877057   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:13.906173   54452 cri.go:89] found id: ""
	I1206 08:57:13.906189   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.906196   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:13.906201   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:13.906260   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:13.932656   54452 cri.go:89] found id: ""
	I1206 08:57:13.932670   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.932677   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:13.932682   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:13.932744   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:13.958494   54452 cri.go:89] found id: ""
	I1206 08:57:13.958507   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.958514   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:13.958522   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:13.958533   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:13.969906   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:13.969925   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:14.055494   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:14.045404   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.046095   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.048372   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.049321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.050244   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:14.045404   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.046095   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.048372   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.049321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.050244   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:14.055511   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:14.055523   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:14.119159   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:14.119179   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:14.151907   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:14.151925   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:16.720554   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:16.731520   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:16.731584   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:16.757438   54452 cri.go:89] found id: ""
	I1206 08:57:16.757452   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.757458   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:16.757463   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:16.757520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:16.782537   54452 cri.go:89] found id: ""
	I1206 08:57:16.782552   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.782559   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:16.782564   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:16.782619   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:16.811967   54452 cri.go:89] found id: ""
	I1206 08:57:16.811981   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.811988   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:16.811993   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:16.812051   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:16.840450   54452 cri.go:89] found id: ""
	I1206 08:57:16.840464   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.840471   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:16.840477   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:16.840553   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:16.865953   54452 cri.go:89] found id: ""
	I1206 08:57:16.865968   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.865975   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:16.865981   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:16.866043   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:16.890520   54452 cri.go:89] found id: ""
	I1206 08:57:16.890540   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.890547   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:16.890552   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:16.890611   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:16.915368   54452 cri.go:89] found id: ""
	I1206 08:57:16.915411   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.915418   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:16.915425   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:16.915435   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:16.975773   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:16.975792   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:16.990535   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:16.990557   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:17.060425   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:17.052130   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.052751   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054271   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054603   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.056244   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:17.052130   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.052751   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054271   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054603   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.056244   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:17.060435   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:17.060446   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:17.124040   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:17.124060   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:19.655902   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:19.666330   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:19.666398   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:19.695219   54452 cri.go:89] found id: ""
	I1206 08:57:19.695232   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.695239   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:19.695245   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:19.695309   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:19.720027   54452 cri.go:89] found id: ""
	I1206 08:57:19.720041   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.720048   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:19.720053   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:19.720112   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:19.745773   54452 cri.go:89] found id: ""
	I1206 08:57:19.745787   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.745794   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:19.745799   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:19.745858   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:19.770885   54452 cri.go:89] found id: ""
	I1206 08:57:19.770898   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.770905   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:19.770910   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:19.770970   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:19.797192   54452 cri.go:89] found id: ""
	I1206 08:57:19.797205   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.797212   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:19.797218   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:19.797278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:19.825222   54452 cri.go:89] found id: ""
	I1206 08:57:19.825236   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.825243   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:19.825248   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:19.825314   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:19.855303   54452 cri.go:89] found id: ""
	I1206 08:57:19.855317   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.855324   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:19.855332   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:19.855342   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:19.912412   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:19.912430   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:19.924673   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:19.924689   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:20.010098   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:19.995577   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998398   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998837   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.003925   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.004952   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:19.995577   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998398   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998837   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.003925   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.004952   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:20.010109   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:20.010121   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:20.081433   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:20.081453   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
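
The "container status" step uses a shell fallback chain: run crictl if "which crictl" finds it, otherwise fall back to "docker ps -a". The same two-step fallback written out in Go, as a sketch under the same assumptions as the earlier probe:

    // status.go: try crictl first, fall back to docker -- the same
    // fallback the log's "container status" command expresses in shell.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl absent or erroring: docker can report the same thing.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("neither crictl nod docker usable:", err)
            return
        }
        fmt.Print(string(out))
    }
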
	I1206 08:57:22.615286   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:22.625653   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:22.625713   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:22.650708   54452 cri.go:89] found id: ""
	I1206 08:57:22.650721   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.650728   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:22.650734   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:22.650793   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:22.675795   54452 cri.go:89] found id: ""
	I1206 08:57:22.675809   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.675816   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:22.675821   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:22.675876   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:22.700140   54452 cri.go:89] found id: ""
	I1206 08:57:22.700153   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.700160   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:22.700165   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:22.700224   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:22.726855   54452 cri.go:89] found id: ""
	I1206 08:57:22.726869   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.726876   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:22.726882   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:22.726938   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:22.751934   54452 cri.go:89] found id: ""
	I1206 08:57:22.751947   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.751954   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:22.751960   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:22.752017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:22.780047   54452 cri.go:89] found id: ""
	I1206 08:57:22.780061   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.780068   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:22.780074   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:22.780132   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:22.804185   54452 cri.go:89] found id: ""
	I1206 08:57:22.804199   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.804206   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:22.804214   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:22.804230   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:22.814840   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:22.814855   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:22.881877   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:22.873545   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.874258   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.875884   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.876440   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.878086   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:22.873545   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.874258   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.875884   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.876440   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.878086   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:22.881887   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:22.881897   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:22.949826   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:22.949846   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:22.990802   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:22.990820   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:25.557401   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:25.567869   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:25.567931   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:25.593044   54452 cri.go:89] found id: ""
	I1206 08:57:25.593058   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.593065   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:25.593070   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:25.593131   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:25.621119   54452 cri.go:89] found id: ""
	I1206 08:57:25.621134   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.621141   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:25.621146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:25.621206   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:25.649977   54452 cri.go:89] found id: ""
	I1206 08:57:25.649991   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.649998   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:25.650003   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:25.650066   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:25.674573   54452 cri.go:89] found id: ""
	I1206 08:57:25.674586   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.674593   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:25.674598   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:25.674654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:25.700412   54452 cri.go:89] found id: ""
	I1206 08:57:25.700425   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.700432   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:25.700438   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:25.700501   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:25.726656   54452 cri.go:89] found id: ""
	I1206 08:57:25.726670   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.726686   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:25.726691   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:25.726760   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:25.751625   54452 cri.go:89] found id: ""
	I1206 08:57:25.751639   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.751646   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:25.751653   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:25.751664   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:25.812914   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:25.804895   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.805687   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807191   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807672   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.809146   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:25.804895   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.805687   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807191   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807672   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.809146   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:25.812924   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:25.812936   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:25.875880   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:25.875898   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:25.905301   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:25.905316   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:25.964301   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:25.964320   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:28.477584   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:28.487626   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:28.487685   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:28.516024   54452 cri.go:89] found id: ""
	I1206 08:57:28.516038   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.516045   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:28.516050   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:28.516109   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:28.542151   54452 cri.go:89] found id: ""
	I1206 08:57:28.542165   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.542172   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:28.542177   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:28.542234   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:28.569963   54452 cri.go:89] found id: ""
	I1206 08:57:28.569977   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.569984   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:28.569989   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:28.570047   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:28.594336   54452 cri.go:89] found id: ""
	I1206 08:57:28.594350   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.594357   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:28.594362   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:28.594421   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:28.620834   54452 cri.go:89] found id: ""
	I1206 08:57:28.620846   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.620854   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:28.620859   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:28.620916   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:28.645672   54452 cri.go:89] found id: ""
	I1206 08:57:28.645686   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.645693   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:28.645698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:28.645762   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:28.670982   54452 cri.go:89] found id: ""
	I1206 08:57:28.670997   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.671004   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:28.671011   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:28.671022   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:28.729216   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:28.729234   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:28.741378   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:28.741394   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:28.808285   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:28.799664   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.800557   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802319   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802654   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.804202   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:28.799664   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.800557   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802319   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802654   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.804202   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:28.808296   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:28.808308   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:28.872187   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:28.872205   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:31.410802   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:31.421507   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:31.421567   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:31.449192   54452 cri.go:89] found id: ""
	I1206 08:57:31.449206   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.449213   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:31.449219   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:31.449278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:31.479043   54452 cri.go:89] found id: ""
	I1206 08:57:31.479057   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.479070   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:31.479075   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:31.479138   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:31.504010   54452 cri.go:89] found id: ""
	I1206 08:57:31.504024   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.504031   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:31.504036   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:31.504094   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:31.529789   54452 cri.go:89] found id: ""
	I1206 08:57:31.529807   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.529818   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:31.529824   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:31.529890   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:31.555332   54452 cri.go:89] found id: ""
	I1206 08:57:31.555346   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.555354   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:31.555359   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:31.555449   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:31.585896   54452 cri.go:89] found id: ""
	I1206 08:57:31.585909   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.585916   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:31.585922   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:31.585980   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:31.610938   54452 cri.go:89] found id: ""
	I1206 08:57:31.610950   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.610958   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:31.610965   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:31.610975   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:31.667535   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:31.667553   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:31.680211   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:31.680234   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:31.750810   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:31.742704   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.743477   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745233   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745766   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.746756   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:31.742704   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.743477   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745233   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745766   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.746756   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:31.750821   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:31.750833   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:31.813960   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:31.813983   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
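The cycle ending above (repeated below at :34, :37, and :40) is minikube polling for control-plane containers that never appear: every crictl query returns an empty id. As a sketch, assuming a shell on the node (e.g. via minikube ssh), the same probes can be replayed by hand:

	# the same checks minikube runs above: is an apiserver process or container present?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"
	done

An empty result from every query, as here, means the control plane was never started, so the connection refusals on port 8441 that follow are a symptom rather than the fault.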
	I1206 08:57:34.341858   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:34.352097   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:34.352170   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:34.379126   54452 cri.go:89] found id: ""
	I1206 08:57:34.379140   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.379148   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:34.379153   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:34.379211   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:34.404136   54452 cri.go:89] found id: ""
	I1206 08:57:34.404150   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.404158   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:34.404163   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:34.404222   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:34.429318   54452 cri.go:89] found id: ""
	I1206 08:57:34.429333   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.429340   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:34.429346   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:34.429410   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:34.454607   54452 cri.go:89] found id: ""
	I1206 08:57:34.454621   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.454628   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:34.454633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:34.454689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:34.481702   54452 cri.go:89] found id: ""
	I1206 08:57:34.481715   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.481722   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:34.481727   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:34.481786   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:34.506222   54452 cri.go:89] found id: ""
	I1206 08:57:34.506236   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.506242   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:34.506247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:34.506307   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:34.531791   54452 cri.go:89] found id: ""
	I1206 08:57:34.531804   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.531811   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:34.531818   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:34.531829   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:34.542352   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:34.542368   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:34.605646   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:34.597261   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.597943   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.599605   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.600148   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.601815   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:34.597261   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.597943   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.599605   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.600148   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.601815   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:34.605655   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:34.605666   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:34.668800   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:34.668818   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:34.703806   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:34.703822   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:37.265019   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:37.275013   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:37.275073   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:37.300683   54452 cri.go:89] found id: ""
	I1206 08:57:37.300696   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.300704   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:37.300710   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:37.300768   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:37.326083   54452 cri.go:89] found id: ""
	I1206 08:57:37.326096   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.326103   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:37.326109   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:37.326169   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:37.354381   54452 cri.go:89] found id: ""
	I1206 08:57:37.354395   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.354402   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:37.354407   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:37.354467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:37.379048   54452 cri.go:89] found id: ""
	I1206 08:57:37.379062   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.379069   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:37.379074   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:37.379132   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:37.407083   54452 cri.go:89] found id: ""
	I1206 08:57:37.407097   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.407104   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:37.407120   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:37.407179   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:37.430756   54452 cri.go:89] found id: ""
	I1206 08:57:37.430769   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.430777   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:37.430782   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:37.430839   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:37.459469   54452 cri.go:89] found id: ""
	I1206 08:57:37.459483   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.459490   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:37.459498   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:37.459510   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:37.470844   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:37.470860   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:37.538783   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:37.530038   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.530744   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.532506   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.533299   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.534867   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:37.530038   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.530744   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.532506   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.533299   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.534867   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:37.538793   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:37.538804   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:37.604935   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:37.604954   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:37.637474   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:37.637491   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:40.195736   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:40.205728   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:40.205790   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:40.242821   54452 cri.go:89] found id: ""
	I1206 08:57:40.242834   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.242841   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:40.242847   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:40.242902   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:40.284606   54452 cri.go:89] found id: ""
	I1206 08:57:40.284620   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.284628   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:40.284633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:40.284689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:40.317256   54452 cri.go:89] found id: ""
	I1206 08:57:40.317270   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.317277   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:40.317282   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:40.317339   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:40.341890   54452 cri.go:89] found id: ""
	I1206 08:57:40.341904   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.341911   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:40.341916   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:40.341971   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:40.365889   54452 cri.go:89] found id: ""
	I1206 08:57:40.365902   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.365909   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:40.365915   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:40.365970   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:40.390366   54452 cri.go:89] found id: ""
	I1206 08:57:40.390379   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.390386   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:40.390393   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:40.390451   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:40.414154   54452 cri.go:89] found id: ""
	I1206 08:57:40.414168   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.414174   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:40.414182   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:40.414192   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:40.425672   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:40.425688   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:40.491793   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:40.479914   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.480484   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485346   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485909   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.487745   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:40.479914   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.480484   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485346   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485909   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.487745   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:40.491804   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:40.491815   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:40.554734   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:40.554754   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:40.585496   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:40.585511   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:43.142927   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:43.152875   54452 kubeadm.go:602] duration metric: took 4m4.203206664s to restartPrimaryControlPlane
	W1206 08:57:43.152943   54452 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 08:57:43.153014   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 08:57:43.558005   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:57:43.571431   54452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:57:43.579298   54452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 08:57:43.579354   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:57:43.587284   54452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 08:57:43.587293   54452 kubeadm.go:158] found existing configuration files:
	
	I1206 08:57:43.587347   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:57:43.595209   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 08:57:43.595263   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 08:57:43.602677   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:57:43.610821   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 08:57:43.610884   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:57:43.618219   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:57:43.625867   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 08:57:43.625922   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:57:43.633373   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:57:43.640818   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 08:57:43.640880   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
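The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. A compact equivalent of the same check, using the endpoint and file names from the log:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done

Here all four files are already absent (grep exits with status 2), so each rm is a no-op and kubeadm init starts from a clean slate.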
	I1206 08:57:43.648275   54452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 08:57:43.690498   54452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 08:57:43.690790   54452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 08:57:43.763599   54452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 08:57:43.763663   54452 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 08:57:43.763697   54452 kubeadm.go:319] OS: Linux
	I1206 08:57:43.763740   54452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 08:57:43.763787   54452 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 08:57:43.763833   54452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 08:57:43.763880   54452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 08:57:43.763928   54452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 08:57:43.763975   54452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 08:57:43.764019   54452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 08:57:43.764066   54452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 08:57:43.764112   54452 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 08:57:43.838707   54452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 08:57:43.838810   54452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 08:57:43.838899   54452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 08:57:43.843797   54452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 08:57:43.849166   54452 out.go:252]   - Generating certificates and keys ...
	I1206 08:57:43.849248   54452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 08:57:43.849312   54452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 08:57:43.849386   54452 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 08:57:43.849451   54452 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 08:57:43.849520   54452 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 08:57:43.849572   54452 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 08:57:43.849633   54452 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 08:57:43.849693   54452 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 08:57:43.849766   54452 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 08:57:43.849838   54452 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 08:57:43.849874   54452 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 08:57:43.849928   54452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 08:57:44.005203   54452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 08:57:44.248156   54452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 08:57:44.506601   54452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 08:57:44.747606   54452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 08:57:44.875144   54452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 08:57:44.875922   54452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 08:57:44.878561   54452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 08:57:44.881876   54452 out.go:252]   - Booting up control plane ...
	I1206 08:57:44.881976   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 08:57:44.882052   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 08:57:44.882117   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 08:57:44.902770   54452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 08:57:44.902884   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 08:57:44.910887   54452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 08:57:44.915557   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 08:57:44.915618   54452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 08:57:45.072565   54452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 08:57:45.072679   54452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:01:45.073201   54452 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00139193s
	I1206 09:01:45.073230   54452 kubeadm.go:319] 
	I1206 09:01:45.073292   54452 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:01:45.073325   54452 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:01:45.073460   54452 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:01:45.073475   54452 kubeadm.go:319] 
	I1206 09:01:45.073605   54452 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:01:45.073641   54452 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:01:45.073671   54452 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:01:45.073674   54452 kubeadm.go:319] 
	I1206 09:01:45.079541   54452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:01:45.080019   54452 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:01:45.080137   54452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:01:45.080372   54452 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 09:01:45.080377   54452 kubeadm.go:319] 
	W1206 09:01:45.080611   54452 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00139193s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
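This first init attempt dies at kubeadm's four-minute wait for a healthy kubelet on 127.0.0.1:10248. The output's own troubleshooting hints, plus a standard cgroup-version check (an assumed diagnostic, not taken from this log), make a minimal triage sequence:

	# suggested by the kubeadm output above
	systemctl status kubelet
	journalctl -xeu kubelet
	# the exact health probe kubeadm was polling
	curl -sSL http://127.0.0.1:10248/healthz
	# cgroup hierarchy in use: prints 'cgroup2fs' on v2, 'tmpfs' on v1 (assumed check)
	stat -fc %T /sys/fs/cgroup/

Given the [WARNING SystemVerification] above, a v1.35 kubelet refusing to come up on a cgroups v1 node is a plausible cause consistent with this failure.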
	
	I1206 09:01:45.080716   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:01:45.081059   54452 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:01:45.527784   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:01:45.541714   54452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:01:45.541768   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:01:45.549724   54452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:01:45.549735   54452 kubeadm.go:158] found existing configuration files:
	
	I1206 09:01:45.549787   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 09:01:45.557657   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:01:45.557710   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:01:45.565116   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 09:01:45.572963   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:01:45.573017   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:01:45.580604   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 09:01:45.588212   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:01:45.588267   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:01:45.595779   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 09:01:45.604082   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:01:45.604137   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:01:45.612084   54452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:01:45.650374   54452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:01:45.650428   54452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:01:45.720642   54452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:01:45.720706   54452 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:01:45.720740   54452 kubeadm.go:319] OS: Linux
	I1206 09:01:45.720783   54452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:01:45.720831   54452 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:01:45.720876   54452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:01:45.720923   54452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:01:45.720970   54452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:01:45.721017   54452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:01:45.721061   54452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:01:45.721107   54452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:01:45.721153   54452 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:01:45.786361   54452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:01:45.786476   54452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:01:45.786571   54452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:01:45.791901   54452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:01:45.795433   54452 out.go:252]   - Generating certificates and keys ...
	I1206 09:01:45.795514   54452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:01:45.795578   54452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:01:45.795654   54452 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 09:01:45.795714   54452 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 09:01:45.795783   54452 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 09:01:45.795835   54452 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 09:01:45.795898   54452 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 09:01:45.795958   54452 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 09:01:45.796032   54452 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 09:01:45.796104   54452 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 09:01:45.796185   54452 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 09:01:45.796240   54452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:01:45.935718   54452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:01:46.055895   54452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:01:46.294260   54452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:01:46.619812   54452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:01:46.778456   54452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:01:46.779211   54452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:01:46.782067   54452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:01:46.785434   54452 out.go:252]   - Booting up control plane ...
	I1206 09:01:46.785536   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:01:46.785617   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:01:46.785688   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:01:46.805726   54452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:01:46.805831   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:01:46.814430   54452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:01:46.816546   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:01:46.816591   54452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:01:46.952811   54452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:01:46.952924   54452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:05:46.951725   54452 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00022284s
	I1206 09:05:46.951748   54452 kubeadm.go:319] 
	I1206 09:05:46.951804   54452 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:05:46.951836   54452 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:05:46.951939   54452 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:05:46.951944   54452 kubeadm.go:319] 
	I1206 09:05:46.952047   54452 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:05:46.952078   54452 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:05:46.952108   54452 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:05:46.952111   54452 kubeadm.go:319] 
	I1206 09:05:46.956655   54452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:05:46.957065   54452 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:05:46.957172   54452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:05:46.957405   54452 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 09:05:46.957409   54452 kubeadm.go:319] 
	I1206 09:05:46.957479   54452 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:05:46.957537   54452 kubeadm.go:403] duration metric: took 12m8.043807841s to StartCluster
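The retry fails identically, and StartCluster gives up after 12m8s. If cgroups v1 is indeed the blocker, the warning text names the escape hatch: the KubeletConfiguration option FailCgroupV1 set to false. A hedged sketch only, since the YAML casing of the field (failCgroupV1) is an assumption here and minikube regenerates /var/lib/kubelet/config.yaml (the file kubeadm writes above) on each start:

	# append the opt-out named in the SystemVerification warning, then restart
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet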
	I1206 09:05:46.957567   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:05:46.957632   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:05:47.005263   54452 cri.go:89] found id: ""
	I1206 09:05:47.005276   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.005284   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 09:05:47.005289   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:05:47.005348   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:05:47.039824   54452 cri.go:89] found id: ""
	I1206 09:05:47.039837   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.039844   54452 logs.go:284] No container was found matching "etcd"
	I1206 09:05:47.039849   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:05:47.039907   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:05:47.069199   54452 cri.go:89] found id: ""
	I1206 09:05:47.069215   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.069222   54452 logs.go:284] No container was found matching "coredns"
	I1206 09:05:47.069228   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:05:47.069290   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:05:47.094120   54452 cri.go:89] found id: ""
	I1206 09:05:47.094134   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.094141   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 09:05:47.094146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:05:47.094204   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:05:47.117873   54452 cri.go:89] found id: ""
	I1206 09:05:47.117887   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.117895   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:05:47.117900   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:05:47.117957   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:05:47.141782   54452 cri.go:89] found id: ""
	I1206 09:05:47.141796   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.141803   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 09:05:47.141809   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:05:47.141869   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:05:47.167265   54452 cri.go:89] found id: ""
	I1206 09:05:47.167280   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.167287   54452 logs.go:284] No container was found matching "kindnet"
	I1206 09:05:47.167295   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 09:05:47.167314   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:05:47.224071   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 09:05:47.224090   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:05:47.235798   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:05:47.235814   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:05:47.303156   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:05:47.295299   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.295956   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297451   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297881   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.299336   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 09:05:47.295299   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.295956   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297451   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297881   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.299336   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:05:47.303181   54452 logs.go:123] Gathering logs for containerd ...
	I1206 09:05:47.303191   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:05:47.366843   54452 logs.go:123] Gathering logs for container status ...
	I1206 09:05:47.366863   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 09:05:47.396270   54452 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 09:05:47.396302   54452 out.go:285] * 
	W1206 09:05:47.396359   54452 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:05:47.396374   54452 out.go:285] * 
	W1206 09:05:47.398505   54452 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:05:47.405628   54452 out.go:203] 
	W1206 09:05:47.408588   54452 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:05:47.408634   54452 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 09:05:47.408679   54452 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 09:05:47.411976   54452 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948296356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948312964Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948379313Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948412085Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948441403Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948462491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948482866Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948510698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948529111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948562713Z" level=info msg="Connect containerd service"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948903673Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.949608593Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967402561Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967484402Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967550135Z" level=info msg="Start subscribing containerd event"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967692902Z" level=info msg="Start recovering state"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019042107Z" level=info msg="Start event monitor"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019110196Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019122955Z" level=info msg="Start streaming server"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019132310Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019140786Z" level=info msg="runtime interface starting up..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019147531Z" level=info msg="starting plugins..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019160085Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 08:53:37 functional-090986 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.020711198Z" level=info msg="containerd successfully booted in 0.094795s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:05:51.056794   21226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:51.057365   21226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:51.059201   21226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:51.059792   21226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:51.061422   21226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 09:05:51 up 48 min,  0 user,  load average: 0.06, 0.18, 0.35
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 09:05:47 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:05:48 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 09:05:48 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:48 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:48 functional-090986 kubelet[21056]: E1206 09:05:48.526350   21056 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:05:48 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:05:48 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:05:49 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 06 09:05:49 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:49 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:49 functional-090986 kubelet[21099]: E1206 09:05:49.308414   21099 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:05:49 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:05:49 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:05:49 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 06 09:05:49 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:49 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:50 functional-090986 kubelet[21119]: E1206 09:05:50.062941   21119 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:05:50 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:05:50 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:05:50 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 06 09:05:50 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:50 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:05:50 functional-090986 kubelet[21147]: E1206 09:05:50.792344   21147 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:05:50 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:05:50 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (342.731588ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)
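
Every failure in this block traces back to the kubelet crash loop captured above: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", so kubeadm's wait-control-plane phase times out and the apiserver on port 8441 never comes up; the dependent tests then fail with connection refused. The kubeadm preflight warning names the escape hatch, setting the kubelet configuration option FailCgroupV1 to false. A minimal sketch of passing that through minikube, using the same --extra-config mechanism the Suggestion line above uses for kubelet.cgroup-driver, and assuming kubelet.fail-cgroupv1 maps onto the kubelet's --fail-cgroupv1 flag (the flag spelling is an assumption, not taken from this log):

	# Hypothetical workaround sketch; confirm the flag name against the
	# kubelet v1.35 docs before relying on it in CI.
	out/minikube-linux-arm64 start -p functional-090986 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.fail-cgroupv1=false

The warning's longer-term recommendation is to move the CI host from cgroup v1 to cgroup v2, since v1 support is deprecated for kubelet v1.35 and newer.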

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-090986 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-090986 apply -f testdata/invalidsvc.yaml: exit status 1 (59.101799ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-090986 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
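
Note that this is the same outage cascading, not a problem with the invalid manifest itself: kubectl never reaches validation because nothing is listening on 192.168.49.2:8441, so it cannot download the OpenAPI schema at all. A quick triage check to separate the two cases, a sketch assuming the same context and endpoint as in the stderr above:

	# If these also get connection refused, the apiserver is down and the
	# validation error above is a symptom, not the cause.
	kubectl --context functional-090986 get --raw /healthz
	curl -k https://192.168.49.2:8441/healthz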

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-090986 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-090986 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-090986 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-090986 --alsologtostderr -v=1] stderr:
I1206 09:07:41.978590   71734 out.go:360] Setting OutFile to fd 1 ...
I1206 09:07:41.978719   71734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:41.978728   71734 out.go:374] Setting ErrFile to fd 2...
I1206 09:07:41.978732   71734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:41.979018   71734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 09:07:41.979294   71734 mustload.go:66] Loading cluster: functional-090986
I1206 09:07:41.979756   71734 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:41.980218   71734 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 09:07:42.007585   71734 host.go:66] Checking if "functional-090986" exists ...
I1206 09:07:42.007986   71734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 09:07:42.069210   71734 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:42.058193698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 09:07:42.069355   71734 api_server.go:166] Checking apiserver status ...
I1206 09:07:42.069430   71734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 09:07:42.069500   71734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 09:07:42.096961   71734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
W1206 09:07:42.214771   71734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1206 09:07:42.218120   71734 out.go:179] * The control-plane node functional-090986 apiserver is not running: (state=Stopped)
I1206 09:07:42.221177   71734 out.go:179]   To start a cluster, run: "minikube start -p functional-090986"
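
The dashboard command therefore never prints a URL: its preflight (the api_server.go:166 check in the stderr above) finds no kube-apiserver process over SSH and bails out. The same probe can be reproduced by hand, a sketch assuming the profile from this run:

	# Mirrors the checks logged above.
	out/minikube-linux-arm64 -p functional-090986 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986
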
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (327.231911ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-090986 service hello-node --url                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh       │ functional-090986 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount     │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001:/mount-9p --alsologtostderr -v=1              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh       │ functional-090986 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh -- ls -la /mount-9p                                                                                                           │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh cat /mount-9p/test-1765012051876100393                                                                                        │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh       │ functional-090986 ssh sudo umount -f /mount-9p                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount     │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2232204122/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh       │ functional-090986 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh -- ls -la /mount-9p                                                                                                           │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh sudo umount -f /mount-9p                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount     │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount1 --alsologtostderr -v=1                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh       │ functional-090986 ssh findmnt -T /mount1                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount     │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount2 --alsologtostderr -v=1                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount     │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount3 --alsologtostderr -v=1                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh       │ functional-090986 ssh findmnt -T /mount1                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh findmnt -T /mount2                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh findmnt -T /mount3                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ mount     │ -p functional-090986 --kill=true                                                                                                                    │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ start     │ -p functional-090986 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ start     │ -p functional-090986 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ start     │ -p functional-090986 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-090986 --alsologtostderr -v=1                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:07:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:07:41.725234   71654 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:07:41.725495   71654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.725523   71654 out.go:374] Setting ErrFile to fd 2...
	I1206 09:07:41.725565   71654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.726047   71654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:07:41.726648   71654 out.go:368] Setting JSON to false
	I1206 09:07:41.727841   71654 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3013,"bootTime":1765009049,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:07:41.727988   71654 start.go:143] virtualization:  
	I1206 09:07:41.731273   71654 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:07:41.735271   71654 notify.go:221] Checking for updates...
	I1206 09:07:41.736168   71654 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:07:41.739723   71654 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:07:41.742669   71654 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:07:41.745442   71654 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:07:41.748505   71654 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:07:41.751454   71654 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:07:41.754797   71654 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:41.755467   71654 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:07:41.792402   71654 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:07:41.792535   71654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:41.854255   71654 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:41.844749592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:07:41.854367   71654 docker.go:319] overlay module found
	I1206 09:07:41.857426   71654 out.go:179] * Using the docker driver based on existing profile
	I1206 09:07:41.860347   71654 start.go:309] selected driver: docker
	I1206 09:07:41.860369   71654 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:41.860483   71654 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:07:41.860587   71654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:41.915317   71654 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:41.905871138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:07:41.915806   71654 cni.go:84] Creating CNI manager for ""
	I1206 09:07:41.915877   71654 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:07:41.915925   71654 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:41.918972   71654 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948296356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948312964Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948379313Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948412085Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948441403Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948462491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948482866Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948510698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948529111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948562713Z" level=info msg="Connect containerd service"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948903673Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.949608593Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967402561Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967484402Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967550135Z" level=info msg="Start subscribing containerd event"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967692902Z" level=info msg="Start recovering state"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019042107Z" level=info msg="Start event monitor"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019110196Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019122955Z" level=info msg="Start streaming server"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019132310Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019140786Z" level=info msg="runtime interface starting up..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019147531Z" level=info msg="starting plugins..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019160085Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 08:53:37 functional-090986 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.020711198Z" level=info msg="containerd successfully booted in 0.094795s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:07:43.283238   23203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:43.285631   23203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:43.286452   23203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:43.288049   23203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:43.288516   23203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 09:07:43 up 50 min,  0 user,  load average: 1.76, 0.57, 0.46
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 09:07:40 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:40 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 472.
	Dec 06 09:07:40 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:40 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:41 functional-090986 kubelet[23069]: E1206 09:07:41.060190   23069 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:41 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:41 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:41 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 473.
	Dec 06 09:07:41 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:41 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:41 functional-090986 kubelet[23090]: E1206 09:07:41.781134   23090 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:41 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:41 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:42 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 06 09:07:42 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:42 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:42 functional-090986 kubelet[23105]: E1206 09:07:42.539400   23105 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:42 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:42 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:43 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 06 09:07:43 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:43 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:43 functional-090986 kubelet[23202]: E1206 09:07:43.285065   23202 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:43 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:43 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (298.022946ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.75s)
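The dashboard failure above is secondary: the kubelet journal shows a crash loop (restart counter already past 470) in which every start dies during config validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver on port 8441 never comes up and the kubectl probes under "describe nodes" get connection refused. As a host-side triage step, a minimal Go sketch like the following reports which cgroup hierarchy the node actually runs; this is illustrative only and not part of the test suite, and it assumes golang.org/x/sys/unix is available (the magic number is CGROUP2_SUPER_MAGIC from linux/magic.h):

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	// cgroup2SuperMagic is CGROUP2_SUPER_MAGIC from linux/magic.h; statfs on
	// /sys/fs/cgroup returns it only when the unified (v2) hierarchy is mounted.
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			panic(err)
		}
		if st.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 (legacy hierarchy), matching the validation failure above")
		}
	}

On a v1 host the usual remedies are booting with systemd.unified_cgroup_hierarchy=1 or pinning a Kubernetes version that still accepts cgroup v1; which of those the CI image should adopt is beyond the scope of this log.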

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 status: exit status 2 (315.688998ms)

-- stdout --
	functional-090986
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-090986 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (310.241674ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-090986 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
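The -f argument is a Go text/template rendered against minikube's status struct, which is why the misspelled "kublet" label survives into the output: it is literal template text, while {{.Kubelet}} is the field lookup. A self-contained sketch of that behavior (Status here is a hypothetical stand-in for minikube's internal type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in carrying just the fields the template references.
	type Status struct{ Host, Kubelet, APIServer, Kubeconfig string }

	func main() {
		// The exact format string passed via -f above.
		const f = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		t := template.Must(template.New("status").Parse(f))
		s := Status{"Running", "Stopped", "Stopped", "Configured"}
		_ = t.Execute(os.Stdout, s) // host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured
	}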
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 status -o json: exit status 2 (326.272687ms)

-- stdout --
	{"Name":"functional-090986","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-090986 status -o json" : exit status 2
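All three status invocations exit 2, but each payload is well formed; the -o json form is the easiest to assert on programmatically. A minimal sketch, with the Status struct again a hypothetical model of only the fields visible in the output above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status mirrors the fields seen in the `minikube status -o json` output.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// Verbatim payload from the failing run above.
		raw := `{"Name":"functional-090986","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var s Status
		if err := json.Unmarshal([]byte(raw), &s); err != nil {
			panic(err)
		}
		// A running host is not enough; kubelet and apiserver must be Running too.
		healthy := s.Host == "Running" && s.Kubelet == "Running" && s.APIServer == "Running"
		fmt.Printf("%s healthy=%v\n", s.Name, healthy) // functional-090986 healthy=false
	}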
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
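The inspect dump shows the container itself is healthy: 8441/tcp, the apiserver port for this profile, is published on 127.0.0.1:32791, so the connection refusals originate inside the guest rather than in Docker's port mapping. For completeness, a sketch of extracting that mapping from `docker inspect` output; inspectEntry is a hypothetical minimal model of the JSON above:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the port-mapping fields used here; the full
	// inspect document (shown above) carries far more.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-090986").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		if len(entries) == 0 {
			panic("no such container")
		}
		// 8441/tcp is the apiserver port minikube publishes for this profile.
		for _, b := range entries[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:32791
		}
	}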
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (347.764715ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-090986 service list                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ service │ functional-090986 service list -o json                                                                                                              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ service │ functional-090986 service --namespace=default --https --url hello-node                                                                              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ service │ functional-090986 service hello-node --url --format={{.IP}}                                                                                         │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ service │ functional-090986 service hello-node --url                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh     │ functional-090986 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount   │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001:/mount-9p --alsologtostderr -v=1              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh     │ functional-090986 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ functional-090986 ssh -- ls -la /mount-9p                                                                                                           │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ functional-090986 ssh cat /mount-9p/test-1765012051876100393                                                                                        │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ functional-090986 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh     │ functional-090986 ssh sudo umount -f /mount-9p                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ functional-090986 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount   │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2232204122/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh     │ functional-090986 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ functional-090986 ssh -- ls -la /mount-9p                                                                                                           │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ functional-090986 ssh sudo umount -f /mount-9p                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount   │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount1 --alsologtostderr -v=1                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh     │ functional-090986 ssh findmnt -T /mount1                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount   │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount2 --alsologtostderr -v=1                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount   │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount3 --alsologtostderr -v=1                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh     │ functional-090986 ssh findmnt -T /mount1                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ functional-090986 ssh findmnt -T /mount2                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh     │ functional-090986 ssh findmnt -T /mount3                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ mount   │ -p functional-090986 --kill=true                                                                                                                    │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:53:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:53:33.876279   54452 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:53:33.876426   54452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:53:33.876430   54452 out.go:374] Setting ErrFile to fd 2...
	I1206 08:53:33.876434   54452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:53:33.876677   54452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:53:33.877013   54452 out.go:368] Setting JSON to false
	I1206 08:53:33.877825   54452 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2165,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:53:33.877882   54452 start.go:143] virtualization:  
	I1206 08:53:33.881239   54452 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:53:33.885112   54452 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:53:33.885177   54452 notify.go:221] Checking for updates...
	I1206 08:53:33.891576   54452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:53:33.894372   54452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:53:33.897142   54452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:53:33.900076   54452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:53:33.902894   54452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:53:33.906249   54452 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:53:33.906348   54452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:53:33.928682   54452 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:53:33.928770   54452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:53:33.993741   54452 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 08:53:33.983085793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:53:33.993843   54452 docker.go:319] overlay module found
	I1206 08:53:33.999105   54452 out.go:179] * Using the docker driver based on existing profile
	I1206 08:53:34.002148   54452 start.go:309] selected driver: docker
	I1206 08:53:34.002159   54452 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:34.002241   54452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:53:34.002360   54452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:53:34.059754   54452 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 08:53:34.048620994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:53:34.060212   54452 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:53:34.060235   54452 cni.go:84] Creating CNI manager for ""
	I1206 08:53:34.060282   54452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:53:34.060330   54452 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:34.065569   54452 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:53:34.068398   54452 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:53:34.071322   54452 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:53:34.074275   54452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:53:34.074316   54452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:53:34.074325   54452 cache.go:65] Caching tarball of preloaded images
	I1206 08:53:34.074364   54452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:53:34.074457   54452 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:53:34.074467   54452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:53:34.074577   54452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:53:34.094292   54452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:53:34.094303   54452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:53:34.094322   54452 cache.go:243] Successfully downloaded all kic artifacts
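The two image.go lines above show the pull being skipped because the kicbase image is already present in the local Docker daemon. A minimal sketch of that presence check, assuming only a local docker CLI; the helper name is ours, not minikube's:

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already holds the
// image: `docker image inspect` exits non-zero when the image is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not cached; a real client would pull it here")
	}
}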
	I1206 08:53:34.094352   54452 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:53:34.094428   54452 start.go:364] duration metric: took 60.843µs to acquireMachinesLock for "functional-090986"
	I1206 08:53:34.094446   54452 start.go:96] Skipping create...Using existing machine configuration
	I1206 08:53:34.094451   54452 fix.go:54] fixHost starting: 
	I1206 08:53:34.094714   54452 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:53:34.110952   54452 fix.go:112] recreateIfNeeded on functional-090986: state=Running err=<nil>
	W1206 08:53:34.110973   54452 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 08:53:34.114350   54452 out.go:252] * Updating the running docker "functional-090986" container ...
	I1206 08:53:34.114380   54452 machine.go:94] provisionDockerMachine start ...
	I1206 08:53:34.114470   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.132110   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.132436   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.132441   54452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:53:34.290732   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
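The docker container inspect -f template above is how minikube resolves the host port that Docker mapped to the container's 22/tcp, which is why the SSH client dials 127.0.0.1:32788. A stand-alone sketch of the same lookup, reusing the exact template from the log (the helper name is ours, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker mapped to the container's 22/tcp,
// exactly what the inspect template in the log extracts.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-090986")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh -p", port, "docker@127.0.0.1")
}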
	
	I1206 08:53:34.290745   54452 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:53:34.290806   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.309786   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.310075   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.310083   54452 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:53:34.468771   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:53:34.468838   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.492421   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.492726   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.492743   54452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:53:34.643743   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:53:34.643757   54452 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:53:34.643785   54452 ubuntu.go:190] setting up certificates
	I1206 08:53:34.643793   54452 provision.go:84] configureAuth start
	I1206 08:53:34.643849   54452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:53:34.661031   54452 provision.go:143] copyHostCerts
	I1206 08:53:34.661090   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:53:34.661103   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:53:34.661173   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:53:34.661279   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:53:34.661283   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:53:34.661307   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:53:34.661364   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:53:34.661367   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:53:34.661387   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:53:34.661440   54452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
	I1206 08:53:35.261601   54452 provision.go:177] copyRemoteCerts
	I1206 08:53:35.261659   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:53:35.261707   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.278502   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.383098   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:53:35.400343   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:53:35.417458   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 08:53:35.434271   54452 provision.go:87] duration metric: took 790.45575ms to configureAuth
	I1206 08:53:35.434289   54452 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:53:35.434485   54452 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:53:35.434491   54452 machine.go:97] duration metric: took 1.320106202s to provisionDockerMachine
	I1206 08:53:35.434498   54452 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:53:35.434507   54452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:53:35.434552   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:53:35.434601   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.452073   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.559110   54452 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:53:35.562282   54452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:53:35.562301   54452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:53:35.562313   54452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:53:35.562372   54452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:53:35.562453   54452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:53:35.562529   54452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:53:35.562578   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:53:35.569704   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:53:35.586692   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:53:35.603733   54452 start.go:296] duration metric: took 169.221467ms for postStartSetup
	I1206 08:53:35.603809   54452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:53:35.603847   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.620625   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.725607   54452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:53:35.730716   54452 fix.go:56] duration metric: took 1.636258463s for fixHost
	I1206 08:53:35.730732   54452 start.go:83] releasing machines lock for "functional-090986", held for 1.636296668s
	I1206 08:53:35.730797   54452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:53:35.748170   54452 ssh_runner.go:195] Run: cat /version.json
	I1206 08:53:35.748211   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.748450   54452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:53:35.748491   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.780618   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.788438   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.895097   54452 ssh_runner.go:195] Run: systemctl --version
	I1206 08:53:35.994868   54452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 08:53:36.000428   54452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:53:36.000495   54452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:53:36.008950   54452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 08:53:36.008964   54452 start.go:496] detecting cgroup driver to use...
	I1206 08:53:36.008997   54452 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 08:53:36.009046   54452 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:53:36.024586   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:53:36.037573   54452 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:53:36.037628   54452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:53:36.053442   54452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:53:36.066493   54452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:53:36.187062   54452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:53:36.308311   54452 docker.go:234] disabling docker service ...
	I1206 08:53:36.308366   54452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:53:36.324390   54452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:53:36.337942   54452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:53:36.464363   54452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:53:36.601173   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:53:36.614787   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:53:36.630199   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:53:36.639943   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:53:36.649262   54452 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:53:36.649336   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:53:36.657952   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:53:36.666666   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:53:36.675637   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:53:36.684412   54452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:53:36.692740   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:53:36.701838   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:53:36.712344   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 08:53:36.721508   54452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:53:36.729269   54452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:53:36.736851   54452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:53:36.864978   54452 ssh_runner.go:195] Run: sudo systemctl restart containerd
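The run of sed -i commands above rewrites /etc/containerd/config.toml in place (sandbox image, SystemdCgroup=false to match the cgroupfs driver detected earlier, runc v2 shims, the CNI conf_dir), and the changes only take effect once containerd is restarted. A sketch of one such edit plus the restart, using the sed expression verbatim from the log; the wrapper function is illustrative, not minikube's:

package main

import (
	"fmt"
	"os/exec"
)

// setCgroupfs forces SystemdCgroup = false in containerd's config so the
// runtime matches the "cgroupfs" driver detected on the host, then restarts
// containerd, which only rereads its config on restart.
func setCgroupfs(configPath string) error {
	cmd := exec.Command("sudo", "sed", "-i", "-r",
		`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`, configPath)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("sed failed: %v: %s", err, out)
	}
	return exec.Command("sudo", "systemctl", "restart", "containerd").Run()
}

func main() {
	if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Println(err)
	}
}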
	I1206 08:53:37.021054   54452 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:53:37.021112   54452 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:53:37.025377   54452 start.go:564] Will wait 60s for crictl version
	I1206 08:53:37.025433   54452 ssh_runner.go:195] Run: which crictl
	I1206 08:53:37.029231   54452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:53:37.053402   54452 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 08:53:37.053462   54452 ssh_runner.go:195] Run: containerd --version
	I1206 08:53:37.077672   54452 ssh_runner.go:195] Run: containerd --version
	I1206 08:53:37.104087   54452 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 08:53:37.107051   54452 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:53:37.126470   54452 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:53:37.133471   54452 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 08:53:37.136362   54452 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:53:37.136495   54452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:53:37.136575   54452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:53:37.161065   54452 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:53:37.161078   54452 containerd.go:534] Images already preloaded, skipping extraction
	I1206 08:53:37.161139   54452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:53:37.189850   54452 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:53:37.189861   54452 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:53:37.189866   54452 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:53:37.189968   54452 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:53:37.190042   54452 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:53:37.215125   54452 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 08:53:37.215146   54452 cni.go:84] Creating CNI manager for ""
	I1206 08:53:37.215156   54452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:53:37.215169   54452 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:53:37.215191   54452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:53:37.215303   54452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 08:53:37.215394   54452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:53:37.223611   54452 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:53:37.223674   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:53:37.231742   54452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:53:37.245618   54452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:53:37.258873   54452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1206 08:53:37.272656   54452 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:53:37.277122   54452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:53:37.404546   54452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:53:38.220934   54452 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:53:38.220945   54452 certs.go:195] generating shared ca certs ...
	I1206 08:53:38.220959   54452 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:53:38.221099   54452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:53:38.221148   54452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:53:38.221154   54452 certs.go:257] generating profile certs ...
	I1206 08:53:38.221235   54452 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:53:38.221287   54452 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:53:38.221325   54452 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:53:38.221433   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:53:38.221466   54452 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:53:38.221473   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:53:38.221504   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:53:38.221527   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:53:38.221551   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:53:38.221601   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:53:38.222193   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:53:38.247995   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:53:38.268014   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:53:38.289184   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:53:38.308825   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:53:38.326629   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:53:38.344198   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:53:38.361819   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:53:38.379442   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:53:38.397025   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:53:38.414583   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:53:38.432182   54452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:53:38.444938   54452 ssh_runner.go:195] Run: openssl version
	I1206 08:53:38.451220   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.458796   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:53:38.466335   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.470195   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.470251   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.511660   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:53:38.520107   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.527562   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:53:38.535252   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.539202   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.539257   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.580913   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:53:38.589267   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.596722   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:53:38.604956   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.609011   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.609077   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.654662   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
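The openssl x509 -hash -noout calls above explain the symlink names being tested: OpenSSL locates a trust anchor in /etc/ssl/certs by the subject-name hash of the certificate plus a ".0" suffix, so minikubeCA.pem hashes to b5213941 and is linked as b5213941.0. A sketch that computes the same link name; the helper name is ours, and minikube drives the same openssl binary over SSH instead:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink computes the OpenSSL subject hash for a PEM cert and
// returns the /etc/ssl/certs symlink name OpenSSL would look up.
func subjectHashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Println(link) // e.g. /etc/ssl/certs/b5213941.0
}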
	I1206 08:53:38.662094   54452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:53:38.666110   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 08:53:38.707066   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 08:53:38.748028   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 08:53:38.790291   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 08:53:38.831326   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 08:53:38.872506   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
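Each -checkend 86400 probe above asks openssl to exit non-zero if the certificate expires within the next 86400 seconds, so every control-plane cert is verified to remain valid for at least another 24 hours before being reused. The same check as a small helper (the function name is ours):

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay reports whether the cert expires in the next 86400s:
// `openssl x509 -checkend` exits 0 when the cert outlives the window.
func expiresWithinADay(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", "86400").Run() != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Println(c, "expiring within 24h:", expiresWithinADay(c))
	}
}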
	I1206 08:53:38.913738   54452 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:38.913828   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:53:38.913894   54452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:53:38.941817   54452 cri.go:89] found id: ""
	I1206 08:53:38.941888   54452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:53:38.949650   54452 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 08:53:38.949660   54452 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 08:53:38.949712   54452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 08:53:38.957046   54452 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:38.957552   54452 kubeconfig.go:125] found "functional-090986" server: "https://192.168.49.2:8441"
	I1206 08:53:38.960001   54452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 08:53:38.973807   54452 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 08:39:02.953222088 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 08:53:37.265532344 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
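Drift detection here is just diff -u between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new: exit status 0 means no drift, 1 means the files differ (as above, only the admission-plugins value changed), and anything else is a real error. A sketch of that decision, not minikube's kubeadm.go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and interprets the exit status the
// way the log does: 0 = no drift, 1 = drift (reconfigure), else an error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Output()
	if err == nil {
		return false, "", nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}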
	I1206 08:53:38.973835   54452 kubeadm.go:1161] stopping kube-system containers ...
	I1206 08:53:38.973855   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 08:53:38.973990   54452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:53:39.006630   54452 cri.go:89] found id: ""
	I1206 08:53:39.006691   54452 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 08:53:39.027188   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:53:39.035115   54452 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  6 08:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 08:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 08:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  6 08:43 /etc/kubernetes/scheduler.conf
	
	I1206 08:53:39.035195   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:53:39.043346   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:53:39.051128   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.051184   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:53:39.058808   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:53:39.066431   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.066486   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:53:39.074261   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:53:39.082004   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.082060   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 08:53:39.089693   54452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:53:39.097973   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:39.144114   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.034967   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.247090   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.303335   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
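Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. The log wraps each call in bash with the versioned binaries prepended to PATH; the sketch below calls the kubeadm binary directly instead, which is a simplification:

package main

import (
	"fmt"
	"os/exec"
)

// rerunPhases replays the `kubeadm init phase` steps from the log, in order,
// against the same config file. Paths are the ones shown above; the loop
// itself is illustrative, not minikube's code.
func rerunPhases() error {
	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		args := append([]string{kubeadm, "init", "phase"}, phase...)
		args = append(args, "--config", cfg)
		// sudo <kubeadm> init phase <...> --config <cfg>
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v: %s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := rerunPhases(); err != nil {
		fmt.Println(err)
	}
}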
	I1206 08:53:40.358218   54452 api_server.go:52] waiting for apiserver process to appear ...
	I1206 08:53:40.358284   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:40.858753   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" poll repeats every 500ms: 115 further identical attempts, 08:53:41.358 through 08:54:38.359, elided; the poll is still repeating at the end of this excerpt ...]
	I1206 08:54:38.858716   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:39.358992   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:39.859022   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
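Judging from the timestamps, the start logic here is a health wait: it probes for a kube-apiserver process over SSH (sudo pgrep -xnf kube-apiserver.*minikube.*) roughly twice per second, and only falls back to CRI-level inspection (below) once no process turns up before its deadline. A minimal sketch of such a poll loop, assuming a local pgrep stands in for minikube's SSH runner and the 500 ms interval / 40 s deadline are illustrative rather than minikube's actual constants:

// pollapiserver.go - sketch of the apiserver process wait seen in the log
// above. Assumptions: pgrep runs locally instead of through minikube's SSH
// runner; interval and deadline are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep -x matches the whole string, -n takes the newest process,
	// -f matches against the full command line; exit 0 means a match.
	err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(40 * time.Second)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
	fmt.Println("no kube-apiserver process; falling back to CRI inspection")
}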
	I1206 08:54:40.358688   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:40.358791   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:40.388106   54452 cri.go:89] found id: ""
	I1206 08:54:40.388120   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.388134   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:40.388140   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:40.388201   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:40.412432   54452 cri.go:89] found id: ""
	I1206 08:54:40.412446   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.412453   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:40.412458   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:40.412515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:40.436247   54452 cri.go:89] found id: ""
	I1206 08:54:40.436261   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.436268   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:40.436274   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:40.436334   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:40.461648   54452 cri.go:89] found id: ""
	I1206 08:54:40.461662   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.461669   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:40.461674   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:40.461731   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:40.490826   54452 cri.go:89] found id: ""
	I1206 08:54:40.490840   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.490846   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:40.490851   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:40.490912   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:40.517246   54452 cri.go:89] found id: ""
	I1206 08:54:40.517259   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.517266   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:40.517272   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:40.517331   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:40.542129   54452 cri.go:89] found id: ""
	I1206 08:54:40.542144   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.542150   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:40.542157   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:40.542167   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:40.599816   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:40.599836   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:40.610692   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:40.610709   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:40.681214   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:40.671721   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673072   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673914   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.675628   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.676278   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:40.671721   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673072   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673914   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.675628   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.676278   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:40.681229   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:40.681240   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:40.746611   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:40.746631   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
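When the process probe comes up empty, the runner sweeps the containerd runtime for each control-plane component with crictl ps -a --quiet --name=<component>; an empty ID list for all seven names (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) is what produces the "No container was found" warnings before the kubelet/dmesg/describe-nodes/containerd logs are gathered. A sketch of that sweep, assuming crictl is invoked locally with sudo as in the logged commands:

// crisweep.go - sketch of the per-component CRI sweep from the log above.
// Assumption: crictl is run locally with sudo rather than through
// minikube's SSH runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func main() {
	for _, name := range components {
		// --quiet prints only container IDs; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s)\n", name, len(ids))
		}
	}
}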
	I1206 08:54:43.275588   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:43.286822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:43.286894   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:43.313760   54452 cri.go:89] found id: ""
	I1206 08:54:43.313779   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.313786   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:43.313793   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:43.313852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:43.338174   54452 cri.go:89] found id: ""
	I1206 08:54:43.338188   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.338203   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:43.338208   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:43.338278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:43.362249   54452 cri.go:89] found id: ""
	I1206 08:54:43.362263   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.362270   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:43.362275   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:43.362333   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:43.386332   54452 cri.go:89] found id: ""
	I1206 08:54:43.386345   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.386353   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:43.386358   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:43.386413   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:43.413265   54452 cri.go:89] found id: ""
	I1206 08:54:43.413278   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.413285   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:43.413290   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:43.413346   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:43.437411   54452 cri.go:89] found id: ""
	I1206 08:54:43.437424   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.437431   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:43.437436   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:43.437497   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:43.463006   54452 cri.go:89] found id: ""
	I1206 08:54:43.463019   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.463046   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:43.463054   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:43.463065   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:43.531909   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:43.523361   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.524077   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525611   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525984   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.527554   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:43.523361   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.524077   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525611   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525984   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.527554   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:43.531920   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:43.531930   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:43.596428   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:43.596447   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:43.625653   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:43.625669   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:43.685656   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:43.685675   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:46.197048   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:46.207403   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:46.207468   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:46.259332   54452 cri.go:89] found id: ""
	I1206 08:54:46.259345   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.259361   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:46.259367   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:46.259453   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:46.293591   54452 cri.go:89] found id: ""
	I1206 08:54:46.293604   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.293611   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:46.293616   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:46.293674   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:46.324320   54452 cri.go:89] found id: ""
	I1206 08:54:46.324333   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.324340   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:46.324345   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:46.324403   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:46.349505   54452 cri.go:89] found id: ""
	I1206 08:54:46.349519   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.349526   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:46.349531   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:46.349592   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:46.372944   54452 cri.go:89] found id: ""
	I1206 08:54:46.372958   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.372965   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:46.372970   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:46.373028   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:46.397863   54452 cri.go:89] found id: ""
	I1206 08:54:46.397876   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.397884   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:46.397889   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:46.397947   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:46.423405   54452 cri.go:89] found id: ""
	I1206 08:54:46.423419   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.423426   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:46.423434   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:46.423444   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:46.479557   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:46.479577   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:46.490975   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:46.490992   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:46.555476   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:46.546289   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.547116   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.548919   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.549655   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.551369   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:46.546289   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.547116   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.548919   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.549655   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.551369   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:46.555486   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:46.555499   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:46.617650   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:46.617666   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:49.145146   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:49.156935   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:49.157011   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:49.181313   54452 cri.go:89] found id: ""
	I1206 08:54:49.181327   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.181334   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:49.181339   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:49.181396   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:49.205770   54452 cri.go:89] found id: ""
	I1206 08:54:49.205783   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.205792   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:49.205797   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:49.205854   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:49.246208   54452 cri.go:89] found id: ""
	I1206 08:54:49.246232   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.246240   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:49.246245   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:49.246312   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:49.276707   54452 cri.go:89] found id: ""
	I1206 08:54:49.276720   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.276739   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:49.276744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:49.276817   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:49.304665   54452 cri.go:89] found id: ""
	I1206 08:54:49.304684   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.304691   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:49.304696   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:49.304754   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:49.329874   54452 cri.go:89] found id: ""
	I1206 08:54:49.329888   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.329895   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:49.329901   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:49.329967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:49.355459   54452 cri.go:89] found id: ""
	I1206 08:54:49.355473   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.355480   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:49.355487   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:49.355503   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:49.383334   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:49.383349   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:49.438134   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:49.438151   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:49.449298   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:49.449313   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:49.517360   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:49.507622   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.508394   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510126   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510650   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.512155   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:49.507622   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.508394   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510126   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510650   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.512155   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:49.517370   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:49.517380   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:52.080828   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:52.091103   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:52.091181   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:52.116535   54452 cri.go:89] found id: ""
	I1206 08:54:52.116549   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.116556   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:52.116570   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:52.116633   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:52.142398   54452 cri.go:89] found id: ""
	I1206 08:54:52.142412   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.142424   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:52.142429   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:52.142485   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:52.169937   54452 cri.go:89] found id: ""
	I1206 08:54:52.169951   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.169958   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:52.169963   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:52.170020   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:52.200818   54452 cri.go:89] found id: ""
	I1206 08:54:52.200832   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.200838   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:52.200843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:52.200899   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:52.228819   54452 cri.go:89] found id: ""
	I1206 08:54:52.228833   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.228841   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:52.228846   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:52.228908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:52.258951   54452 cri.go:89] found id: ""
	I1206 08:54:52.258964   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.258972   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:52.258977   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:52.259042   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:52.294986   54452 cri.go:89] found id: ""
	I1206 08:54:52.295000   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.295007   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:52.295015   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:52.295025   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:52.362225   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:52.362245   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:52.389713   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:52.389729   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:52.445119   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:52.445137   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:52.458958   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:52.458980   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:52.523486   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:52.514851   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.515698   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517261   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517893   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.519458   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:52.514851   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.515698   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517261   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517893   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.519458   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
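The connection-refused errors in every failed kubectl attempt are consistent with the empty crictl sweeps: kubectl dials the apiserver port configured for this profile (8441) on localhost inside the node, and with no kube-apiserver container running nothing is listening there, so the TCP handshake is rejected outright rather than timing out. A minimal probe that distinguishes the two cases, assuming it runs on the node so localhost:8441 is the same address kubectl was dialing:

// probeport.go - minimal TCP probe for the apiserver endpoint from the
// failed kubectl calls above. Assumption: run on the node itself.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// "connection refused" means the port is closed (no apiserver
		// listening); a timeout would instead suggest filtering or a hang.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on localhost:8441")
}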
	I1206 08:54:55.023766   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:55.034751   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:55.034820   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:55.060938   54452 cri.go:89] found id: ""
	I1206 08:54:55.060952   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.060960   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:55.060965   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:55.061025   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:55.086352   54452 cri.go:89] found id: ""
	I1206 08:54:55.086365   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.086383   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:55.086389   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:55.086457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:55.111318   54452 cri.go:89] found id: ""
	I1206 08:54:55.111334   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.111341   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:55.111346   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:55.111427   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:55.140103   54452 cri.go:89] found id: ""
	I1206 08:54:55.140118   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.140125   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:55.140130   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:55.140194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:55.164478   54452 cri.go:89] found id: ""
	I1206 08:54:55.164492   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.164500   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:55.164505   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:55.164565   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:55.191182   54452 cri.go:89] found id: ""
	I1206 08:54:55.191195   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.191203   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:55.191209   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:55.191266   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:55.216083   54452 cri.go:89] found id: ""
	I1206 08:54:55.216097   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.216104   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:55.216111   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:55.216122   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:55.303982   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:55.294944   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.295756   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.297492   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.298117   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.299945   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:55.294944   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.295756   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.297492   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.298117   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.299945   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:55.303992   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:55.304003   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:55.365857   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:55.365875   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:55.393911   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:55.393928   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:55.455110   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:55.455129   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:57.967188   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:57.977408   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:57.977467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:58.003574   54452 cri.go:89] found id: ""
	I1206 08:54:58.003588   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.003596   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:58.003601   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:58.003662   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:58.029323   54452 cri.go:89] found id: ""
	I1206 08:54:58.029337   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.029344   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:58.029348   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:58.029408   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:58.054996   54452 cri.go:89] found id: ""
	I1206 08:54:58.055010   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.055018   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:58.055023   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:58.055087   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:58.079698   54452 cri.go:89] found id: ""
	I1206 08:54:58.079711   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.079718   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:58.079723   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:58.079785   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:58.106383   54452 cri.go:89] found id: ""
	I1206 08:54:58.106396   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.106403   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:58.106408   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:58.106467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:58.135301   54452 cri.go:89] found id: ""
	I1206 08:54:58.135315   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.135325   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:58.135330   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:58.135431   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:58.165240   54452 cri.go:89] found id: ""
	I1206 08:54:58.165255   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.165262   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:58.165269   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:58.165279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:58.176468   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:58.176483   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:58.263783   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:58.246836   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.247297   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.255628   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.256461   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.259475   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:58.246836   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.247297   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.255628   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.256461   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.259475   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:58.263793   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:58.263806   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:58.336059   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:58.336078   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:58.364550   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:58.364565   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:00.926395   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:00.936607   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:00.936669   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:00.961767   54452 cri.go:89] found id: ""
	I1206 08:55:00.961781   54452 logs.go:282] 0 containers: []
	W1206 08:55:00.961788   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:00.961793   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:00.961855   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:00.987655   54452 cri.go:89] found id: ""
	I1206 08:55:00.987671   54452 logs.go:282] 0 containers: []
	W1206 08:55:00.987678   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:00.987684   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:00.987753   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:01.017321   54452 cri.go:89] found id: ""
	I1206 08:55:01.017335   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.017342   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:01.017347   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:01.017405   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:01.043120   54452 cri.go:89] found id: ""
	I1206 08:55:01.043134   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.043140   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:01.043146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:01.043208   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:01.069934   54452 cri.go:89] found id: ""
	I1206 08:55:01.069951   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.069958   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:01.069967   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:01.070037   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:01.095743   54452 cri.go:89] found id: ""
	I1206 08:55:01.095757   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.095765   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:01.095772   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:01.095832   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:01.120915   54452 cri.go:89] found id: ""
	I1206 08:55:01.120933   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.120940   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:01.120948   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:01.120958   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:01.179366   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:01.179392   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:01.191802   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:01.191818   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:01.292667   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:01.282943   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284116   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284837   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.286639   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.287228   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:01.282943   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284116   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284837   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.286639   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.287228   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:01.292676   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:01.292687   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:01.357710   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:01.357729   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
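(The per-component probe above repeats essentially unchanged for every retry that follows, so a condensed sketch may make the rest of this log easier to scan. This is not minikube's source, only a bash approximation assembled from the commands the log itself records; the loop structure and the roughly 3-second retry interval are inferences from the timestamps.)

    # bash approximation of the probe seen above (assumed loop structure; the
    # crictl, journalctl, and dmesg invocations are copied verbatim from the log)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      if [ -z "$(sudo crictl ps -a --quiet --name="$name")" ]; then
        echo "No container was found matching \"$name\""
      fi
    done
    # with no containers found, the same fallback logs are gathered each pass
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400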
	I1206 08:55:03.889702   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:03.900135   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:03.900194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:03.926097   54452 cri.go:89] found id: ""
	I1206 08:55:03.926122   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.926129   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:03.926135   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:03.926204   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:03.950796   54452 cri.go:89] found id: ""
	I1206 08:55:03.950810   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.950818   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:03.950823   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:03.950881   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:03.976998   54452 cri.go:89] found id: ""
	I1206 08:55:03.977012   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.977018   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:03.977024   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:03.977083   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:04.004847   54452 cri.go:89] found id: ""
	I1206 08:55:04.004862   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.004870   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:04.004876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:04.004943   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:04.030715   54452 cri.go:89] found id: ""
	I1206 08:55:04.030729   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.030737   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:04.030742   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:04.030806   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:04.056324   54452 cri.go:89] found id: ""
	I1206 08:55:04.056338   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.056345   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:04.056351   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:04.056412   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:04.082124   54452 cri.go:89] found id: ""
	I1206 08:55:04.082137   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.082145   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:04.082152   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:04.082163   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:04.138719   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:04.138737   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:04.150252   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:04.150269   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:04.220848   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:04.209917   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.210563   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212138   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212692   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.214187   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:04.209917   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.210563   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212138   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212692   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.214187   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:04.220858   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:04.220868   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:04.293646   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:04.293665   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:06.823180   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:06.833518   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:06.833576   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:06.863092   54452 cri.go:89] found id: ""
	I1206 08:55:06.863106   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.863113   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:06.863119   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:06.863177   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:06.888504   54452 cri.go:89] found id: ""
	I1206 08:55:06.888519   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.888525   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:06.888530   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:06.888595   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:06.918175   54452 cri.go:89] found id: ""
	I1206 08:55:06.918189   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.918197   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:06.918202   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:06.918261   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:06.944460   54452 cri.go:89] found id: ""
	I1206 08:55:06.944473   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.944480   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:06.944485   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:06.944551   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:06.973765   54452 cri.go:89] found id: ""
	I1206 08:55:06.973778   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.973786   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:06.973791   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:06.973852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:06.999311   54452 cri.go:89] found id: ""
	I1206 08:55:06.999324   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.999331   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:06.999337   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:06.999415   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:07.027677   54452 cri.go:89] found id: ""
	I1206 08:55:07.027690   54452 logs.go:282] 0 containers: []
	W1206 08:55:07.027697   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:07.027705   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:07.027715   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:07.086320   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:07.086338   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:07.097607   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:07.097623   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:07.161897   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:07.153185   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.154007   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.155730   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.156339   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.158053   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:07.153185   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.154007   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.155730   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.156339   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.158053   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:07.161907   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:07.161919   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:07.224772   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:07.224792   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:09.768328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:09.778939   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:09.779000   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:09.805473   54452 cri.go:89] found id: ""
	I1206 08:55:09.805487   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.805494   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:09.805499   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:09.805557   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:09.830605   54452 cri.go:89] found id: ""
	I1206 08:55:09.830618   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.830625   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:09.830630   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:09.830689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:09.855855   54452 cri.go:89] found id: ""
	I1206 08:55:09.855869   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.855876   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:09.855881   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:09.855937   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:09.880900   54452 cri.go:89] found id: ""
	I1206 08:55:09.880913   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.880920   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:09.880925   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:09.880981   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:09.906796   54452 cri.go:89] found id: ""
	I1206 08:55:09.906810   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.906817   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:09.906822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:09.906882   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:09.932980   54452 cri.go:89] found id: ""
	I1206 08:55:09.932996   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.933004   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:09.933009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:09.933081   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:09.961870   54452 cri.go:89] found id: ""
	I1206 08:55:09.961884   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.961892   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:09.961900   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:09.961922   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:10.018106   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:10.018129   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:10.031414   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:10.031441   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:10.103678   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:10.092903   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.093952   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.095756   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.096440   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.098120   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:10.092903   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.093952   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.095756   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.096440   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.098120   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:10.103689   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:10.103700   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:10.167044   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:10.167063   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:12.697325   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:12.707894   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:12.707958   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:12.732888   54452 cri.go:89] found id: ""
	I1206 08:55:12.732902   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.732914   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:12.732919   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:12.732975   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:12.756939   54452 cri.go:89] found id: ""
	I1206 08:55:12.756953   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.756960   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:12.756965   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:12.757026   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:12.785954   54452 cri.go:89] found id: ""
	I1206 08:55:12.785967   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.785974   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:12.785979   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:12.786037   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:12.810560   54452 cri.go:89] found id: ""
	I1206 08:55:12.810574   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.810581   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:12.810586   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:12.810643   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:12.835829   54452 cri.go:89] found id: ""
	I1206 08:55:12.835844   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.835851   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:12.835856   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:12.835917   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:12.860638   54452 cri.go:89] found id: ""
	I1206 08:55:12.860653   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.860660   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:12.860665   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:12.860723   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:12.885721   54452 cri.go:89] found id: ""
	I1206 08:55:12.885734   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.885742   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:12.885750   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:12.885760   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:12.944772   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:12.944793   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:12.956560   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:12.956577   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:13.023566   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:13.013901   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.014692   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.016414   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.017110   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.019101   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:13.013901   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.014692   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.016414   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.017110   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.019101   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:13.023586   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:13.023596   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:13.086592   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:13.086612   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:15.617835   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:15.628437   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:15.628524   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:15.657209   54452 cri.go:89] found id: ""
	I1206 08:55:15.657223   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.657230   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:15.657235   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:15.657297   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:15.681664   54452 cri.go:89] found id: ""
	I1206 08:55:15.681678   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.681685   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:15.681690   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:15.681748   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:15.707568   54452 cri.go:89] found id: ""
	I1206 08:55:15.707581   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.707588   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:15.707594   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:15.707654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:15.733456   54452 cri.go:89] found id: ""
	I1206 08:55:15.733470   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.733493   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:15.733499   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:15.733558   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:15.758882   54452 cri.go:89] found id: ""
	I1206 08:55:15.758896   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.758903   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:15.758908   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:15.758967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:15.784184   54452 cri.go:89] found id: ""
	I1206 08:55:15.784198   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.784205   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:15.784210   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:15.784269   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:15.809166   54452 cri.go:89] found id: ""
	I1206 08:55:15.809178   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.809186   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:15.809194   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:15.809204   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:15.865479   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:15.865498   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:15.876370   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:15.876386   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:15.949255   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:15.940278   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.941080   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.942741   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.943482   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.945251   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:15.940278   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.941080   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.942741   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.943482   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.945251   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:15.949277   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:15.949289   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:16.012838   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:16.012858   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
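(The "container status" collector above relies on a small shell fallback chain that recurs in every pass; unpacked once: `which crictl || echo crictl` substitutes the full path to crictl, or the bare name if `which` finds nothing on $PATH, and the outer `||` falls back to Docker only when the crictl invocation itself fails.)

    # verbatim from the log lines above
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a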
	I1206 08:55:18.547536   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:18.557857   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:18.557924   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:18.583107   54452 cri.go:89] found id: ""
	I1206 08:55:18.583120   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.583128   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:18.583132   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:18.583192   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:18.608251   54452 cri.go:89] found id: ""
	I1206 08:55:18.608264   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.608271   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:18.608276   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:18.608333   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:18.634059   54452 cri.go:89] found id: ""
	I1206 08:55:18.634073   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.634080   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:18.634085   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:18.634158   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:18.659252   54452 cri.go:89] found id: ""
	I1206 08:55:18.659266   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.659273   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:18.659278   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:18.659338   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:18.687529   54452 cri.go:89] found id: ""
	I1206 08:55:18.687542   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.687549   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:18.687554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:18.687611   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:18.716705   54452 cri.go:89] found id: ""
	I1206 08:55:18.716719   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.716726   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:18.716731   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:18.716790   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:18.743861   54452 cri.go:89] found id: ""
	I1206 08:55:18.743875   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.743882   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:18.743890   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:18.743900   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:18.800501   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:18.800520   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:18.811514   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:18.811531   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:18.877593   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:18.868814   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.869581   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871247   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871996   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.873734   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:18.868814   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.869581   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871247   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871996   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.873734   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:18.877603   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:18.877614   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:18.945147   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:18.945175   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:21.473372   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:21.484974   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:21.485036   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:21.516585   54452 cri.go:89] found id: ""
	I1206 08:55:21.516598   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.516606   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:21.516611   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:21.516670   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:21.543917   54452 cri.go:89] found id: ""
	I1206 08:55:21.543930   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.543937   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:21.543943   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:21.544006   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:21.581932   54452 cri.go:89] found id: ""
	I1206 08:55:21.581946   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.581953   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:21.581958   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:21.582017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:21.606796   54452 cri.go:89] found id: ""
	I1206 08:55:21.606810   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.606817   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:21.606822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:21.606885   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:21.632673   54452 cri.go:89] found id: ""
	I1206 08:55:21.632686   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.632693   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:21.632698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:21.632791   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:21.656595   54452 cri.go:89] found id: ""
	I1206 08:55:21.656609   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.656616   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:21.656621   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:21.656681   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:21.681710   54452 cri.go:89] found id: ""
	I1206 08:55:21.681723   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.681730   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:21.681738   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:21.681747   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:21.737731   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:21.737750   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:21.748929   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:21.748944   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:21.814714   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:21.804423   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.805260   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807123   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807866   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.809673   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:21.804423   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.805260   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807123   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807866   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.809673   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:21.814725   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:21.814737   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:21.878842   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:21.878860   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:24.408240   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:24.418359   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:24.418420   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:24.445088   54452 cri.go:89] found id: ""
	I1206 08:55:24.445102   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.445109   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:24.445115   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:24.445218   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:24.481785   54452 cri.go:89] found id: ""
	I1206 08:55:24.481799   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.481807   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:24.481812   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:24.481871   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:24.514861   54452 cri.go:89] found id: ""
	I1206 08:55:24.514875   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.514882   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:24.514888   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:24.514951   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:24.545514   54452 cri.go:89] found id: ""
	I1206 08:55:24.545528   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.545535   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:24.545540   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:24.545604   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:24.571688   54452 cri.go:89] found id: ""
	I1206 08:55:24.571703   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.571710   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:24.571715   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:24.571780   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:24.596172   54452 cri.go:89] found id: ""
	I1206 08:55:24.596192   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.596200   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:24.596205   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:24.596267   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:24.621684   54452 cri.go:89] found id: ""
	I1206 08:55:24.621698   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.621706   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:24.621713   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:24.621728   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:24.683261   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:24.683279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:24.717098   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:24.717115   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:24.774777   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:24.774797   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:24.786405   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:24.786422   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:24.852542   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:24.844316   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.844764   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846310   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846629   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.848126   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:24.844316   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.844764   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846310   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846629   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.848126   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
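The block above is one full iteration of minikube's apiserver readiness poll: a pgrep for a kube-apiserver process, a crictl query for each expected control-plane container (every one comes back empty), then log gathering, with kubectl describe nodes failing because nothing answers on localhost:8441. A minimal sketch to rerun the same checks by hand from the host, assuming the functional-090986 profile from this run still exists and minikube is on PATH:

    # Re-run the checks the test loop performs (commands taken from the log above;
    # the profile name comes from this run)
    minikube ssh -p functional-090986 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      minikube ssh -p functional-090986 -- sudo crictl ps -a --quiet --name="$c"
    done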
	I1206 08:55:27.352798   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:27.363390   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:27.363453   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:27.390863   54452 cri.go:89] found id: ""
	I1206 08:55:27.390877   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.390884   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:27.390891   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:27.390950   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:27.419763   54452 cri.go:89] found id: ""
	I1206 08:55:27.419777   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.419784   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:27.419789   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:27.419843   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:27.443855   54452 cri.go:89] found id: ""
	I1206 08:55:27.443868   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.443875   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:27.443880   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:27.443937   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:27.472073   54452 cri.go:89] found id: ""
	I1206 08:55:27.472086   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.472093   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:27.472099   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:27.472157   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:27.505330   54452 cri.go:89] found id: ""
	I1206 08:55:27.505344   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.505352   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:27.505357   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:27.505414   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:27.533936   54452 cri.go:89] found id: ""
	I1206 08:55:27.533950   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.533957   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:27.533962   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:27.534017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:27.562283   54452 cri.go:89] found id: ""
	I1206 08:55:27.562296   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.562303   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:27.562311   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:27.562320   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:27.619092   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:27.619110   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:27.630324   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:27.630339   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:27.695241   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:27.686898   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.687546   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689114   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689707   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.691358   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:27.686898   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.687546   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689114   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689707   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.691358   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:27.695251   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:27.695266   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:27.757877   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:27.757895   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:30.286157   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:30.296567   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:30.296625   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:30.321390   54452 cri.go:89] found id: ""
	I1206 08:55:30.321405   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.321413   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:30.321418   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:30.321480   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:30.350054   54452 cri.go:89] found id: ""
	I1206 08:55:30.350068   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.350075   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:30.350083   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:30.350149   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:30.375330   54452 cri.go:89] found id: ""
	I1206 08:55:30.375350   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.375358   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:30.375363   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:30.375445   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:30.406133   54452 cri.go:89] found id: ""
	I1206 08:55:30.406146   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.406153   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:30.406158   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:30.406217   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:30.434180   54452 cri.go:89] found id: ""
	I1206 08:55:30.434195   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.434202   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:30.434207   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:30.434272   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:30.461023   54452 cri.go:89] found id: ""
	I1206 08:55:30.461037   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.461044   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:30.461049   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:30.461107   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:30.493229   54452 cri.go:89] found id: ""
	I1206 08:55:30.493243   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.493250   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:30.493268   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:30.493279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:30.556454   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:30.556473   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:30.567243   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:30.567258   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:30.630618   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:30.622515   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.623325   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.624965   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.625291   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.626789   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:30.622515   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.623325   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.624965   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.625291   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.626789   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:30.630628   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:30.630638   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:30.692365   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:30.692384   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:33.222243   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:33.233203   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:33.233264   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:33.259086   54452 cri.go:89] found id: ""
	I1206 08:55:33.259099   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.259107   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:33.259113   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:33.259175   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:33.285885   54452 cri.go:89] found id: ""
	I1206 08:55:33.285912   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.285920   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:33.285926   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:33.286002   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:33.313522   54452 cri.go:89] found id: ""
	I1206 08:55:33.313536   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.313543   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:33.313554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:33.313614   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:33.343303   54452 cri.go:89] found id: ""
	I1206 08:55:33.343318   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.343335   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:33.343341   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:33.343434   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:33.372461   54452 cri.go:89] found id: ""
	I1206 08:55:33.372475   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.372482   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:33.372488   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:33.372556   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:33.398660   54452 cri.go:89] found id: ""
	I1206 08:55:33.398674   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.398682   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:33.398695   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:33.398770   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:33.425653   54452 cri.go:89] found id: ""
	I1206 08:55:33.425667   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.425675   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:33.425683   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:33.425693   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:33.436575   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:33.436591   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:33.519919   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:33.509857   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.511357   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.512054   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.513835   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.514450   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:33.509857   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.511357   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.512054   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.513835   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.514450   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:33.519928   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:33.519939   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:33.584991   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:33.585010   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:33.617158   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:33.617175   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
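Every kubectl attempt in these cycles fails with connect: connection refused on [::1]:8441, which means nothing is bound to the apiserver port inside the node. One way to confirm the port state directly; ss and curl being present in the node image is an assumption, not something this log verifies:

    # Check whether anything listens on 8441 (assumes ss and curl exist in the node image)
    minikube ssh -p functional-090986 -- "sudo ss -ltnp | grep 8441"
    minikube ssh -p functional-090986 -- curl -sk https://localhost:8441/healthz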
	I1206 08:55:36.180867   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:36.191295   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:36.191369   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:36.215504   54452 cri.go:89] found id: ""
	I1206 08:55:36.215518   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.215525   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:36.215530   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:36.215586   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:36.241860   54452 cri.go:89] found id: ""
	I1206 08:55:36.241874   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.241881   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:36.241886   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:36.241948   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:36.270206   54452 cri.go:89] found id: ""
	I1206 08:55:36.270220   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.270227   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:36.270232   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:36.270292   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:36.297638   54452 cri.go:89] found id: ""
	I1206 08:55:36.297651   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.297658   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:36.297663   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:36.297721   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:36.327655   54452 cri.go:89] found id: ""
	I1206 08:55:36.327681   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.327689   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:36.327694   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:36.327764   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:36.353797   54452 cri.go:89] found id: ""
	I1206 08:55:36.353811   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.353818   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:36.353825   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:36.353884   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:36.378781   54452 cri.go:89] found id: ""
	I1206 08:55:36.378795   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.378802   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:36.378810   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:36.378823   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:36.435517   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:36.435537   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:36.446663   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:36.446679   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:36.538183   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:36.527758   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.528583   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.530703   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.531276   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.534098   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:36.527758   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.528583   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.530703   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.531276   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.534098   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:36.538193   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:36.538203   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:36.601364   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:36.601383   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:39.129686   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:39.140306   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:39.140375   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:39.169862   54452 cri.go:89] found id: ""
	I1206 08:55:39.169876   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.169883   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:39.169889   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:39.169952   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:39.195755   54452 cri.go:89] found id: ""
	I1206 08:55:39.195771   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.195778   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:39.195784   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:39.195842   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:39.220719   54452 cri.go:89] found id: ""
	I1206 08:55:39.220732   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.220739   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:39.220744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:39.220801   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:39.249535   54452 cri.go:89] found id: ""
	I1206 08:55:39.249549   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.249556   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:39.249561   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:39.249620   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:39.281267   54452 cri.go:89] found id: ""
	I1206 08:55:39.281281   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.281288   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:39.281293   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:39.281379   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:39.306847   54452 cri.go:89] found id: ""
	I1206 08:55:39.306860   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.306867   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:39.306873   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:39.306933   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:39.334023   54452 cri.go:89] found id: ""
	I1206 08:55:39.334036   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.334057   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:39.334064   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:39.334073   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:39.363589   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:39.363604   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:39.420152   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:39.420169   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:39.430815   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:39.430830   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:39.513246   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:39.503975   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.504808   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.506588   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.507212   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.508933   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:39.503975   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.504808   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.506588   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.507212   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.508933   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:39.513256   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:39.513266   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:42.085786   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:42.098317   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:42.098387   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:42.134671   54452 cri.go:89] found id: ""
	I1206 08:55:42.134686   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.134695   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:42.134705   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:42.134775   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:42.167474   54452 cri.go:89] found id: ""
	I1206 08:55:42.167489   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.167498   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:42.167505   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:42.167575   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:42.202078   54452 cri.go:89] found id: ""
	I1206 08:55:42.202093   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.202100   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:42.202106   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:42.202171   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:42.228525   54452 cri.go:89] found id: ""
	I1206 08:55:42.228539   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.228546   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:42.228552   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:42.228621   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:42.257322   54452 cri.go:89] found id: ""
	I1206 08:55:42.257337   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.257344   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:42.257350   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:42.257457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:42.284221   54452 cri.go:89] found id: ""
	I1206 08:55:42.284235   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.284253   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:42.284259   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:42.284329   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:42.311654   54452 cri.go:89] found id: ""
	I1206 08:55:42.311668   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.311675   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:42.311683   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:42.311694   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:42.368273   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:42.368294   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:42.379477   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:42.379493   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:42.443515   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:42.434726   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.435504   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437161   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437774   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.439236   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:42.434726   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.435504   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437161   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437774   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.439236   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:42.443526   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:42.443543   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:42.512858   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:42.512878   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
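With crictl reporting no control-plane containers at all, the next question is why kubelet never started the static pods. For a kubeadm-provisioned node like this one, the manifests would normally sit under /etc/kubernetes/manifests; that path is the standard kubeadm layout, assumed here rather than shown by this log:

    # Were the static-pod manifests written? (standard kubeadm path, assumed)
    minikube ssh -p functional-090986 -- sudo ls -l /etc/kubernetes/manifests
    minikube ssh -p functional-090986 -- "sudo journalctl -u kubelet --no-pager -n 200 | grep -iE 'apiserver|static'"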
	I1206 08:55:45.043040   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:45.068009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:45.068076   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:45.109801   54452 cri.go:89] found id: ""
	I1206 08:55:45.109815   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.109823   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:45.109829   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:45.109896   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:45.149825   54452 cri.go:89] found id: ""
	I1206 08:55:45.149841   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.149849   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:45.149855   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:45.149929   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:45.187416   54452 cri.go:89] found id: ""
	I1206 08:55:45.187433   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.187441   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:45.187446   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:45.187520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:45.235891   54452 cri.go:89] found id: ""
	I1206 08:55:45.235908   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.235916   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:45.235922   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:45.236066   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:45.279650   54452 cri.go:89] found id: ""
	I1206 08:55:45.279665   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.279673   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:45.279681   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:45.279750   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:45.325794   54452 cri.go:89] found id: ""
	I1206 08:55:45.325844   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.325871   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:45.325893   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:45.325962   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:45.357237   54452 cri.go:89] found id: ""
	I1206 08:55:45.357251   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.357258   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:45.357266   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:45.357291   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:45.385704   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:45.385720   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:45.442819   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:45.442837   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:45.454504   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:45.454523   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:45.547110   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:45.538939   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.539311   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.540633   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.541395   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.542989   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:45.538939   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.539311   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.540633   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.541395   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.542989   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:45.547119   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:45.547133   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:48.116344   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:48.126956   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:48.127022   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:48.152657   54452 cri.go:89] found id: ""
	I1206 08:55:48.152671   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.152678   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:48.152684   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:48.152743   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:48.182395   54452 cri.go:89] found id: ""
	I1206 08:55:48.182409   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.182417   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:48.182422   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:48.182494   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:48.211297   54452 cri.go:89] found id: ""
	I1206 08:55:48.211310   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.211327   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:48.211333   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:48.211402   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:48.236544   54452 cri.go:89] found id: ""
	I1206 08:55:48.236558   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.236565   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:48.236571   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:48.236627   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:48.262553   54452 cri.go:89] found id: ""
	I1206 08:55:48.262570   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.262582   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:48.262587   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:48.262680   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:48.295466   54452 cri.go:89] found id: ""
	I1206 08:55:48.295488   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.295495   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:48.295506   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:48.295586   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:48.321818   54452 cri.go:89] found id: ""
	I1206 08:55:48.321830   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.321837   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:48.321845   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:48.321856   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:48.378211   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:48.378229   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:48.389232   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:48.389255   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:48.456700   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:48.448592   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.449583   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.450577   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.451171   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.452831   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:48.448592   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.449583   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.450577   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.451171   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.452831   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:48.456711   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:48.456720   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:48.523317   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:48.523335   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:51.052796   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:51.063850   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:51.063912   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:51.089613   54452 cri.go:89] found id: ""
	I1206 08:55:51.089628   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.089635   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:51.089643   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:51.089727   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:51.116588   54452 cri.go:89] found id: ""
	I1206 08:55:51.116601   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.116609   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:51.116614   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:51.116679   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:51.146172   54452 cri.go:89] found id: ""
	I1206 08:55:51.146186   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.146193   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:51.146199   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:51.146266   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:51.172046   54452 cri.go:89] found id: ""
	I1206 08:55:51.172071   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.172078   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:51.172084   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:51.172163   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:51.200464   54452 cri.go:89] found id: ""
	I1206 08:55:51.200477   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.200495   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:51.200501   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:51.200561   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:51.229170   54452 cri.go:89] found id: ""
	I1206 08:55:51.229184   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.229191   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:51.229196   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:51.229254   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:51.254375   54452 cri.go:89] found id: ""
	I1206 08:55:51.254389   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.254396   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:51.254403   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:51.254413   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:51.317370   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:51.317390   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:51.344624   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:51.344642   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:51.402739   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:51.402759   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:51.413613   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:51.413629   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:51.483207   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:51.470850   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.471424   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.472991   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.473416   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.475113   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:51.470850   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.471424   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.472991   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.473416   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.475113   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:53.983859   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:53.997260   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:53.997326   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:54.024774   54452 cri.go:89] found id: ""
	I1206 08:55:54.024788   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.024795   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:54.024801   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:54.024866   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:54.050802   54452 cri.go:89] found id: ""
	I1206 08:55:54.050830   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.050837   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:54.050842   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:54.050911   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:54.079419   54452 cri.go:89] found id: ""
	I1206 08:55:54.079433   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.079440   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:54.079446   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:54.079517   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:54.104851   54452 cri.go:89] found id: ""
	I1206 08:55:54.104864   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.104871   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:54.104876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:54.104933   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:54.133815   54452 cri.go:89] found id: ""
	I1206 08:55:54.133829   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.133847   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:54.133853   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:54.133909   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:54.163047   54452 cri.go:89] found id: ""
	I1206 08:55:54.163071   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.163078   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:54.163083   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:54.163150   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:54.190227   54452 cri.go:89] found id: ""
	I1206 08:55:54.190242   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.190249   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:54.190263   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:54.190273   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:54.246189   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:54.246208   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:54.257068   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:54.257083   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:54.322094   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:54.313214   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.313895   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.315763   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.316388   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.318125   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:54.313214   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.313895   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.315763   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.316388   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.318125   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:54.322104   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:54.322114   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:54.385131   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:54.385150   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:56.917265   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:56.927438   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:56.927499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:56.951596   54452 cri.go:89] found id: ""
	I1206 08:55:56.951611   54452 logs.go:282] 0 containers: []
	W1206 08:55:56.951618   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:56.951623   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:56.951685   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:56.975635   54452 cri.go:89] found id: ""
	I1206 08:55:56.975649   54452 logs.go:282] 0 containers: []
	W1206 08:55:56.975656   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:56.975661   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:56.975718   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:57.005275   54452 cri.go:89] found id: ""
	I1206 08:55:57.005289   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.005296   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:57.005302   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:57.005370   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:57.031301   54452 cri.go:89] found id: ""
	I1206 08:55:57.031315   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.031333   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:57.031339   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:57.031422   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:57.057133   54452 cri.go:89] found id: ""
	I1206 08:55:57.057146   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.057153   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:57.057159   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:57.057221   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:57.081358   54452 cri.go:89] found id: ""
	I1206 08:55:57.081371   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.081378   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:57.081384   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:57.081442   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:57.116018   54452 cri.go:89] found id: ""
	I1206 08:55:57.116033   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.116049   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:57.116057   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:57.116067   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:57.171598   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:57.171615   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:57.182153   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:57.182169   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:57.245457   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:57.237416   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.237828   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.239402   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.240057   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.241674   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:57.237416   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.237828   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.239402   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.240057   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.241674   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:57.245466   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:57.245476   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:57.307969   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:57.307987   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:59.836840   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:59.846983   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:59.847044   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:59.871818   54452 cri.go:89] found id: ""
	I1206 08:55:59.871831   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.871838   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:59.871844   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:59.871904   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:59.896695   54452 cri.go:89] found id: ""
	I1206 08:55:59.896709   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.896716   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:59.896721   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:59.896787   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:59.921887   54452 cri.go:89] found id: ""
	I1206 08:55:59.921911   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.921918   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:59.921924   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:59.921998   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:59.948824   54452 cri.go:89] found id: ""
	I1206 08:55:59.948837   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.948845   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:59.948850   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:59.948908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:59.974553   54452 cri.go:89] found id: ""
	I1206 08:55:59.974567   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.974575   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:59.974580   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:59.974638   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:00.057731   54452 cri.go:89] found id: ""
	I1206 08:56:00.057783   54452 logs.go:282] 0 containers: []
	W1206 08:56:00.057791   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:00.057798   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:00.058035   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:00.191639   54452 cri.go:89] found id: ""
	I1206 08:56:00.191655   54452 logs.go:282] 0 containers: []
	W1206 08:56:00.191663   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:00.191671   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:00.191685   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:00.488607   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:00.462504   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.463297   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477164   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477991   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.479845   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:00.462504   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.463297   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477164   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477991   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.479845   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:00.488619   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:00.488632   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:00.602413   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:00.602434   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:00.637181   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:00.637200   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:00.701850   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:00.701868   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:03.215126   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:03.225397   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:03.225464   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:03.253115   54452 cri.go:89] found id: ""
	I1206 08:56:03.253128   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.253135   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:03.253143   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:03.253203   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:03.278704   54452 cri.go:89] found id: ""
	I1206 08:56:03.278717   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.278724   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:03.278730   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:03.278788   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:03.304400   54452 cri.go:89] found id: ""
	I1206 08:56:03.304414   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.304421   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:03.304427   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:03.304484   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:03.330915   54452 cri.go:89] found id: ""
	I1206 08:56:03.330927   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.330934   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:03.330939   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:03.331000   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:03.356123   54452 cri.go:89] found id: ""
	I1206 08:56:03.356136   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.356143   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:03.356149   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:03.356205   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:03.381497   54452 cri.go:89] found id: ""
	I1206 08:56:03.381511   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.381517   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:03.381523   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:03.381582   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:03.405821   54452 cri.go:89] found id: ""
	I1206 08:56:03.405834   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.405841   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:03.405849   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:03.405859   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:03.462897   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:03.462918   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:03.474378   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:03.474393   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:03.559522   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:03.549699   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.550344   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.552761   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.554016   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.555406   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:03.549699   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.550344   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.552761   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.554016   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.555406   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:03.559532   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:03.559545   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:03.626698   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:03.626716   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:06.154123   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:06.164837   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:06.164908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:06.191102   54452 cri.go:89] found id: ""
	I1206 08:56:06.191115   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.191123   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:06.191128   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:06.191194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:06.215815   54452 cri.go:89] found id: ""
	I1206 08:56:06.215829   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.215836   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:06.215841   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:06.215901   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:06.241431   54452 cri.go:89] found id: ""
	I1206 08:56:06.241445   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.241452   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:06.241457   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:06.241520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:06.266677   54452 cri.go:89] found id: ""
	I1206 08:56:06.266692   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.266699   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:06.266705   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:06.266768   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:06.290924   54452 cri.go:89] found id: ""
	I1206 08:56:06.290940   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.290948   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:06.290953   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:06.291015   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:06.315767   54452 cri.go:89] found id: ""
	I1206 08:56:06.315781   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.315788   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:06.315794   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:06.315852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:06.341271   54452 cri.go:89] found id: ""
	I1206 08:56:06.341284   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.341291   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:06.341298   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:06.341309   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:06.369777   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:06.369793   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:06.426976   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:06.426995   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:06.438111   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:06.438126   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:06.515349   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:06.504075   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.504993   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.506819   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.507499   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.510593   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:06.504075   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.504993   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.506819   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.507499   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.510593   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:06.515366   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:06.515403   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:09.084957   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:09.095918   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:09.095982   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:09.122788   54452 cri.go:89] found id: ""
	I1206 08:56:09.122802   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.122816   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:09.122822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:09.122886   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:09.150280   54452 cri.go:89] found id: ""
	I1206 08:56:09.150296   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.150303   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:09.150308   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:09.150370   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:09.175968   54452 cri.go:89] found id: ""
	I1206 08:56:09.175982   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.175989   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:09.175995   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:09.176054   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:09.205200   54452 cri.go:89] found id: ""
	I1206 08:56:09.205214   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.205221   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:09.205226   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:09.205284   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:09.229722   54452 cri.go:89] found id: ""
	I1206 08:56:09.229741   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.229758   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:09.229764   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:09.229823   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:09.253449   54452 cri.go:89] found id: ""
	I1206 08:56:09.253462   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.253469   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:09.253475   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:09.253532   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:09.278075   54452 cri.go:89] found id: ""
	I1206 08:56:09.278096   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.278103   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:09.278111   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:09.278127   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:09.334207   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:09.334224   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:09.345268   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:09.345284   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:09.411030   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:09.402872   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.403332   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.404994   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.405444   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.406900   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:09.402872   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.403332   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.404994   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.405444   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.406900   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:09.411046   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:09.411057   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:09.477250   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:09.477268   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:12.012172   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:12.023603   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:12.023666   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:12.049522   54452 cri.go:89] found id: ""
	I1206 08:56:12.049536   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.049544   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:12.049549   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:12.049616   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:12.079322   54452 cri.go:89] found id: ""
	I1206 08:56:12.079336   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.079343   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:12.079348   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:12.079434   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:12.104615   54452 cri.go:89] found id: ""
	I1206 08:56:12.104629   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.104636   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:12.104642   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:12.104698   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:12.129522   54452 cri.go:89] found id: ""
	I1206 08:56:12.129536   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.129542   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:12.129548   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:12.129603   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:12.154617   54452 cri.go:89] found id: ""
	I1206 08:56:12.154631   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.154637   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:12.154642   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:12.154701   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:12.180772   54452 cri.go:89] found id: ""
	I1206 08:56:12.180786   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.180793   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:12.180798   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:12.180860   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:12.204559   54452 cri.go:89] found id: ""
	I1206 08:56:12.204573   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.204585   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:12.204593   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:12.204605   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:12.267761   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:12.267780   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:12.295680   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:12.295696   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:12.355740   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:12.355759   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:12.367574   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:12.367589   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:12.438034   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:12.429169   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.429845   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.431592   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.432279   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.433870   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:12.429169   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.429845   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.431592   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.432279   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.433870   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
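	
	The block above is one full iteration of minikube's apiserver wait loop: every few seconds it asks the CRI runtime for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, collects diagnostics, and retries. The empty `found id: ""` results can be reproduced by hand with the same crictl invocation the loop runs; a minimal sketch, assuming the functional-090986 node is still up and reachable via `minikube ssh`:
	
	    $ minikube ssh -p functional-090986 "sudo crictl ps -a --quiet --name=kube-apiserver"
	    # no output: no kube-apiserver container exists in any state, matching the log
	    $ minikube ssh -p functional-090986 "sudo crictl ps -a --quiet --name=etcd"
	    # likewise empty for etcd and the remaining components
	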
	I1206 08:56:14.938326   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:14.948550   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:14.948610   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:14.974812   54452 cri.go:89] found id: ""
	I1206 08:56:14.974825   54452 logs.go:282] 0 containers: []
	W1206 08:56:14.974832   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:14.974843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:14.974901   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:15.033969   54452 cri.go:89] found id: ""
	I1206 08:56:15.033985   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.034002   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:15.034009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:15.034081   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:15.061932   54452 cri.go:89] found id: ""
	I1206 08:56:15.061946   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.061954   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:15.061959   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:15.062054   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:15.092717   54452 cri.go:89] found id: ""
	I1206 08:56:15.092731   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.092738   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:15.092744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:15.092804   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:15.119219   54452 cri.go:89] found id: ""
	I1206 08:56:15.119234   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.119242   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:15.119247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:15.119309   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:15.149464   54452 cri.go:89] found id: ""
	I1206 08:56:15.149477   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.149485   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:15.149490   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:15.149550   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:15.175614   54452 cri.go:89] found id: ""
	I1206 08:56:15.175628   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.175635   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:15.175643   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:15.175653   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:15.239770   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:15.239789   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:15.267874   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:15.267891   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:15.327229   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:15.327247   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:15.338540   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:15.338557   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:15.402152   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:15.393377   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.393759   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395003   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395462   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.397184   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:15.393377   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.393759   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395003   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395462   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.397184   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
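	
	Every `describe nodes` attempt fails the same way: kubectl cannot reach https://localhost:8441, the --apiserver-port this test passes to `minikube start`. `connection refused` on [::1]:8441 means nothing is listening on that port inside the node at all, i.e. the apiserver container never came up, rather than coming up and failing health checks. A quick manual confirmation, assuming the `ss` utility is present in the node image:
	
	    $ minikube ssh -p functional-090986 "sudo ss -tlnp | grep 8441"
	    # empty output confirms there is no listener on the apiserver port
	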
	I1206 08:56:17.903812   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:17.914165   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:17.914229   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:17.942343   54452 cri.go:89] found id: ""
	I1206 08:56:17.942357   54452 logs.go:282] 0 containers: []
	W1206 08:56:17.942363   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:17.942369   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:17.942427   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:17.972379   54452 cri.go:89] found id: ""
	I1206 08:56:17.972394   54452 logs.go:282] 0 containers: []
	W1206 08:56:17.972401   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:17.972406   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:17.972474   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:18.000726   54452 cri.go:89] found id: ""
	I1206 08:56:18.000740   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.000762   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:18.000768   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:18.000832   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:18.027348   54452 cri.go:89] found id: ""
	I1206 08:56:18.027406   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.027418   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:18.027431   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:18.027515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:18.055911   54452 cri.go:89] found id: ""
	I1206 08:56:18.055925   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.055933   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:18.055937   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:18.055994   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:18.085367   54452 cri.go:89] found id: ""
	I1206 08:56:18.085381   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.085392   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:18.085398   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:18.085466   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:18.110486   54452 cri.go:89] found id: ""
	I1206 08:56:18.110505   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.110513   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:18.110520   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:18.110531   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:18.174849   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:18.166389   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.166788   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168371   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168921   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.170369   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:18.166389   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.166788   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168371   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168921   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.170369   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:18.174859   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:18.174870   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:18.237754   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:18.237774   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:18.268012   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:18.268033   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:18.324652   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:18.324671   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
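	
	On each failed probe the loop gathers the same five diagnostic sources seen throughout this log: the kubelet and containerd journals, recent dmesg warnings and errors, `kubectl describe nodes`, and a container-status listing. The same data can be pulled by hand with the commands the log shows, wrapped in `minikube ssh` (sketch; the profile name is taken from the test invocation):
	
	    $ minikube ssh -p functional-090986 "sudo journalctl -u kubelet -n 400"
	    $ minikube ssh -p functional-090986 "sudo journalctl -u containerd -n 400"
	    $ minikube ssh -p functional-090986 "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	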
	I1206 08:56:20.837649   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:20.848772   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:20.848844   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:20.875162   54452 cri.go:89] found id: ""
	I1206 08:56:20.875177   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.875184   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:20.875190   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:20.875260   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:20.900599   54452 cri.go:89] found id: ""
	I1206 08:56:20.900613   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.900620   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:20.900625   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:20.900683   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:20.928195   54452 cri.go:89] found id: ""
	I1206 08:56:20.928209   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.928216   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:20.928221   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:20.928288   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:20.952510   54452 cri.go:89] found id: ""
	I1206 08:56:20.952524   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.952532   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:20.952537   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:20.952594   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:20.976651   54452 cri.go:89] found id: ""
	I1206 08:56:20.976665   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.976672   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:20.976677   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:20.976747   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:21.003279   54452 cri.go:89] found id: ""
	I1206 08:56:21.003294   54452 logs.go:282] 0 containers: []
	W1206 08:56:21.003301   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:21.003306   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:21.003372   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:21.029382   54452 cri.go:89] found id: ""
	I1206 08:56:21.029396   54452 logs.go:282] 0 containers: []
	W1206 08:56:21.029403   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:21.029411   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:21.029421   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:21.091035   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:21.082849   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.083705   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085252   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085569   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.087050   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:21.082849   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.083705   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085252   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085569   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.087050   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:21.091049   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:21.091059   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:21.153084   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:21.153102   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:21.179992   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:21.180009   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:21.242302   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:21.242323   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:23.753350   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:23.764153   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:23.764212   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:23.794093   54452 cri.go:89] found id: ""
	I1206 08:56:23.794108   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.794115   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:23.794121   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:23.794192   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:23.818597   54452 cri.go:89] found id: ""
	I1206 08:56:23.818611   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.818618   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:23.818623   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:23.818681   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:23.845861   54452 cri.go:89] found id: ""
	I1206 08:56:23.845875   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.845882   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:23.845887   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:23.845951   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:23.871357   54452 cri.go:89] found id: ""
	I1206 08:56:23.871371   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.871423   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:23.871428   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:23.871486   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:23.895904   54452 cri.go:89] found id: ""
	I1206 08:56:23.895918   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.895926   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:23.895931   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:23.895998   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:23.921905   54452 cri.go:89] found id: ""
	I1206 08:56:23.921918   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.921925   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:23.921931   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:23.921988   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:23.946488   54452 cri.go:89] found id: ""
	I1206 08:56:23.946512   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.946520   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:23.946529   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:23.946539   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:24.002888   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:24.002907   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:24.015146   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:24.015170   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:24.085686   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:24.074786   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.075755   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.078321   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.079336   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.080390   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:24.074786   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.075755   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.078321   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.079336   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.080390   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:24.085697   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:24.085707   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:24.149216   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:24.149233   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
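	
	Each retry cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches the pattern against the full command line, -x requires the whole line to match, and -n keeps only the newest match. pgrep exits with status 1 when nothing matches, which is presumably what sends the loop back to the crictl checks here (the next log line follows within ~10 ms). The check can be run standalone:
	
	    $ minikube ssh -p functional-090986 "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	    # no output and exit status 1 when no apiserver process exists on the node
	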
	I1206 08:56:26.686769   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:26.697125   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:26.697183   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:26.728496   54452 cri.go:89] found id: ""
	I1206 08:56:26.728510   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.728527   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:26.728532   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:26.728597   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:26.755101   54452 cri.go:89] found id: ""
	I1206 08:56:26.755115   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.755130   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:26.755136   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:26.755195   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:26.785198   54452 cri.go:89] found id: ""
	I1206 08:56:26.785211   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.785229   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:26.785234   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:26.785298   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:26.816431   54452 cri.go:89] found id: ""
	I1206 08:56:26.816445   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.816452   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:26.816457   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:26.816515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:26.841875   54452 cri.go:89] found id: ""
	I1206 08:56:26.841889   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.841897   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:26.841902   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:26.841964   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:26.868358   54452 cri.go:89] found id: ""
	I1206 08:56:26.868372   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.868379   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:26.868384   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:26.868456   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:26.895528   54452 cri.go:89] found id: ""
	I1206 08:56:26.895541   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.895547   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:26.895555   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:26.895564   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:26.961952   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:26.961970   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:27.006459   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:27.006475   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:27.063666   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:27.063685   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:27.074993   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:27.075011   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:27.138852   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:27.130623   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.131223   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.132971   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.133326   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.134833   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:27.130623   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.131223   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.132971   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.133326   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.134833   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:29.639504   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:29.649774   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:29.649848   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:29.679629   54452 cri.go:89] found id: ""
	I1206 08:56:29.679642   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.679650   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:29.679655   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:29.679716   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:29.704535   54452 cri.go:89] found id: ""
	I1206 08:56:29.704550   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.704557   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:29.704563   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:29.704635   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:29.737627   54452 cri.go:89] found id: ""
	I1206 08:56:29.737640   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.737647   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:29.737652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:29.737709   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:29.767083   54452 cri.go:89] found id: ""
	I1206 08:56:29.767097   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.767104   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:29.767109   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:29.767166   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:29.793665   54452 cri.go:89] found id: ""
	I1206 08:56:29.793685   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.793693   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:29.793698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:29.793761   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:29.822695   54452 cri.go:89] found id: ""
	I1206 08:56:29.822709   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.822717   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:29.822722   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:29.822781   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:29.848347   54452 cri.go:89] found id: ""
	I1206 08:56:29.848360   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.848380   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:29.848389   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:29.848399   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:29.911329   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:29.911349   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:29.939981   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:29.939996   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:30.001274   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:30.001296   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:30.022683   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:30.022703   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:30.138182   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:30.128285   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.129603   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.130253   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132024   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132540   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:30.128285   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.129603   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.130253   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132024   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132540   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
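	
	The `listing CRI containers in root /run/containerd/runc/k8s.io` lines name the runc state root plus the containerd namespace (`k8s.io`) that the kubelet's CRI integration uses. The same (empty) container set can be inspected one level below crictl with ctr, assuming the ctr binary ships in the kicbase image:
	
	    $ minikube ssh -p functional-090986 "sudo ctr -n k8s.io containers ls"
	    # header row only, no entries: containerd holds no Kubernetes containers
	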
	I1206 08:56:32.638423   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:32.648554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:32.648613   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:32.672719   54452 cri.go:89] found id: ""
	I1206 08:56:32.672733   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.672741   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:32.672745   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:32.672808   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:32.697375   54452 cri.go:89] found id: ""
	I1206 08:56:32.697389   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.697396   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:32.697401   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:32.697456   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:32.730608   54452 cri.go:89] found id: ""
	I1206 08:56:32.730621   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.730628   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:32.730633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:32.730690   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:32.756886   54452 cri.go:89] found id: ""
	I1206 08:56:32.756900   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.756906   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:32.756911   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:32.756967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:32.786416   54452 cri.go:89] found id: ""
	I1206 08:56:32.786429   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.786436   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:32.786441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:32.786499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:32.817852   54452 cri.go:89] found id: ""
	I1206 08:56:32.817866   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.817873   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:32.817878   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:32.817948   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:32.847789   54452 cri.go:89] found id: ""
	I1206 08:56:32.847803   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.847810   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:32.847817   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:32.847826   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:32.913422   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:32.904590   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.905149   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907029   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907428   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.909140   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:32.904590   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.905149   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907029   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907428   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.909140   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:32.913432   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:32.913443   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:32.979128   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:32.979147   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:33.009021   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:33.009038   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:33.066116   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:33.066134   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:35.577653   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:35.587677   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:35.587739   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:35.612385   54452 cri.go:89] found id: ""
	I1206 08:56:35.612398   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.612405   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:35.612416   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:35.612474   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:35.639348   54452 cri.go:89] found id: ""
	I1206 08:56:35.639362   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.639369   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:35.639395   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:35.639457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:35.662406   54452 cri.go:89] found id: ""
	I1206 08:56:35.662420   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.662427   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:35.662432   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:35.662494   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:35.686450   54452 cri.go:89] found id: ""
	I1206 08:56:35.686464   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.686471   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:35.686476   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:35.686535   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:35.715902   54452 cri.go:89] found id: ""
	I1206 08:56:35.715915   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.715922   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:35.715927   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:35.715986   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:35.753483   54452 cri.go:89] found id: ""
	I1206 08:56:35.753496   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.753503   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:35.753509   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:35.753571   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:35.787475   54452 cri.go:89] found id: ""
	I1206 08:56:35.787488   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.787495   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:35.787509   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:35.787520   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:35.799521   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:35.799536   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:35.865541   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:35.856956   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.857477   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859150   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859621   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.861412   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:35.856956   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.857477   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859150   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859621   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.861412   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
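	Note: the repeated "connection refused" on [::1]:8441 in the block above means nothing is listening on the apiserver port at all, rather than a TLS or authentication failure. A minimal check from inside the node, sketched here under the assumption that the port is the 8441 seen in this log (the /livez endpoint is the standard apiserver liveness path, not something taken from this log):

	    # No output from ss => nothing is bound to 8441, matching "connection refused".
	    sudo ss -ltn 'sport = :8441'
	    # A healthy apiserver would answer "ok" here; a dead one refuses the connection.
	    curl -ksS https://localhost:8441/livez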
	I1206 08:56:35.865551   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:35.865562   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:35.928394   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:35.928412   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:35.960163   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:35.960178   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:38.518969   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:38.529441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:38.529503   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:38.556742   54452 cri.go:89] found id: ""
	I1206 08:56:38.556756   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.556764   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:38.556769   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:38.556828   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:38.585575   54452 cri.go:89] found id: ""
	I1206 08:56:38.585589   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.585596   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:38.585602   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:38.585675   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:38.610698   54452 cri.go:89] found id: ""
	I1206 08:56:38.610713   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.610721   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:38.610726   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:38.610799   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:38.635789   54452 cri.go:89] found id: ""
	I1206 08:56:38.635802   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.635809   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:38.635814   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:38.635875   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:38.664415   54452 cri.go:89] found id: ""
	I1206 08:56:38.664429   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.664436   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:38.664441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:38.664499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:38.692373   54452 cri.go:89] found id: ""
	I1206 08:56:38.692387   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.692394   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:38.692400   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:38.692463   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:38.717762   54452 cri.go:89] found id: ""
	I1206 08:56:38.717776   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.717784   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:38.717791   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:38.717804   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:38.761801   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:38.761816   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:38.823195   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:38.823214   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:38.834338   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:38.834354   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:38.902350   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:38.894283   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895054   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895859   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897382   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897703   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:38.894283   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895054   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895859   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897382   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897703   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:38.902361   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:38.902372   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:41.468409   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:41.478754   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:41.478820   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:41.506969   54452 cri.go:89] found id: ""
	I1206 08:56:41.506982   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.506989   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:41.506997   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:41.507057   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:41.531981   54452 cri.go:89] found id: ""
	I1206 08:56:41.531995   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.532002   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:41.532007   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:41.532067   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:41.556489   54452 cri.go:89] found id: ""
	I1206 08:56:41.556503   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.556511   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:41.556516   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:41.556578   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:41.582188   54452 cri.go:89] found id: ""
	I1206 08:56:41.582202   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.582209   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:41.582224   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:41.582297   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:41.608043   54452 cri.go:89] found id: ""
	I1206 08:56:41.608065   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.608073   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:41.608078   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:41.608149   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:41.636701   54452 cri.go:89] found id: ""
	I1206 08:56:41.636714   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.636722   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:41.636728   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:41.636786   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:41.661109   54452 cri.go:89] found id: ""
	I1206 08:56:41.661123   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.661131   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:41.661138   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:41.661147   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:41.718276   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:41.718293   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:41.731689   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:41.731704   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:41.813161   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:41.804518   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.805060   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.806862   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.807552   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.809239   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:41.804518   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.805060   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.806862   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.807552   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.809239   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:41.813171   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:41.813183   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:41.879169   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:41.879189   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:44.409328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:44.419475   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:44.419534   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:44.444626   54452 cri.go:89] found id: ""
	I1206 08:56:44.444640   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.444647   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:44.444652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:44.444709   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:44.469065   54452 cri.go:89] found id: ""
	I1206 08:56:44.469078   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.469085   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:44.469090   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:44.469154   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:44.492979   54452 cri.go:89] found id: ""
	I1206 08:56:44.492993   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.493000   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:44.493006   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:44.493065   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:44.517980   54452 cri.go:89] found id: ""
	I1206 08:56:44.517994   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.518012   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:44.518018   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:44.518084   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:44.550302   54452 cri.go:89] found id: ""
	I1206 08:56:44.550315   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.550322   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:44.550338   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:44.550411   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:44.574741   54452 cri.go:89] found id: ""
	I1206 08:56:44.574754   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.574773   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:44.574779   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:44.574844   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:44.599427   54452 cri.go:89] found id: ""
	I1206 08:56:44.599440   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.599447   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:44.599454   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:44.599464   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:44.655195   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:44.655213   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:44.666596   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:44.666611   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:44.743689   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:44.734701   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.735711   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737288   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737597   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.739087   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:44.734701   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.735711   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737288   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737597   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.739087   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:44.743706   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:44.743716   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:44.813114   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:44.813132   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:47.340486   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:47.350443   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:47.350502   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:47.381645   54452 cri.go:89] found id: ""
	I1206 08:56:47.381659   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.381666   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:47.381671   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:47.381732   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:47.408660   54452 cri.go:89] found id: ""
	I1206 08:56:47.408674   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.408681   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:47.408686   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:47.408751   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:47.434188   54452 cri.go:89] found id: ""
	I1206 08:56:47.434201   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.434208   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:47.434213   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:47.434272   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:47.463313   54452 cri.go:89] found id: ""
	I1206 08:56:47.463334   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.463342   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:47.463347   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:47.463437   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:47.491850   54452 cri.go:89] found id: ""
	I1206 08:56:47.491864   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.491871   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:47.491876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:47.491942   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:47.520200   54452 cri.go:89] found id: ""
	I1206 08:56:47.520214   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.520221   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:47.520226   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:47.520289   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:47.546930   54452 cri.go:89] found id: ""
	I1206 08:56:47.546943   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.546950   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:47.546958   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:47.546969   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:47.607002   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:47.607020   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:47.617961   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:47.617976   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:47.681928   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:47.673776   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.674574   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676165   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676631   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.678134   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:47.673776   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.674574   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676165   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676631   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.678134   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:47.681938   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:47.681949   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:47.749465   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:47.749483   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:50.280242   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:50.291127   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:50.291189   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:50.316285   54452 cri.go:89] found id: ""
	I1206 08:56:50.316299   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.316307   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:50.316312   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:50.316378   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:50.342947   54452 cri.go:89] found id: ""
	I1206 08:56:50.342961   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.342968   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:50.342973   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:50.343034   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:50.368308   54452 cri.go:89] found id: ""
	I1206 08:56:50.368322   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.368329   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:50.368334   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:50.368392   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:50.392557   54452 cri.go:89] found id: ""
	I1206 08:56:50.392571   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.392578   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:50.392583   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:50.392643   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:50.417455   54452 cri.go:89] found id: ""
	I1206 08:56:50.417469   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.417477   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:50.417482   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:50.417547   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:50.442791   54452 cri.go:89] found id: ""
	I1206 08:56:50.442805   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.442813   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:50.442818   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:50.442887   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:50.473290   54452 cri.go:89] found id: ""
	I1206 08:56:50.473304   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.473310   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:50.473318   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:50.473329   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:50.484225   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:50.484242   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:50.551034   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:50.542777   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.543204   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.544973   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.545586   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.547123   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:50.542777   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.543204   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.544973   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.545586   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.547123   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:50.551048   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:50.551059   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:50.614007   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:50.614025   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:50.642494   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:50.642510   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:53.201231   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:53.211652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:53.211712   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:53.237084   54452 cri.go:89] found id: ""
	I1206 08:56:53.237098   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.237106   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:53.237117   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:53.237179   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:53.265518   54452 cri.go:89] found id: ""
	I1206 08:56:53.265533   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.265541   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:53.265547   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:53.265619   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:53.291219   54452 cri.go:89] found id: ""
	I1206 08:56:53.291233   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.291242   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:53.291247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:53.291304   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:53.316119   54452 cri.go:89] found id: ""
	I1206 08:56:53.316135   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.316143   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:53.316148   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:53.316208   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:53.345553   54452 cri.go:89] found id: ""
	I1206 08:56:53.345566   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.345574   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:53.345579   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:53.345637   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:53.374116   54452 cri.go:89] found id: ""
	I1206 08:56:53.374130   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.374138   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:53.374144   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:53.374201   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:53.401450   54452 cri.go:89] found id: ""
	I1206 08:56:53.401463   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.401470   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:53.401488   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:53.401498   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:53.464628   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:53.464645   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:53.492208   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:53.492225   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:53.548199   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:53.548216   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:53.559872   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:53.559887   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:53.624790   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:53.616289   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.617036   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.618638   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.619245   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.620839   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:53.616289   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.617036   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.618638   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.619245   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.620839   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
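	Note: the cycle above (pgrep for kube-apiserver, then crictl queries for each control-plane container, then log gathering) is minikube's health-check retry loop; every probe returns empty because no control-plane container ever started. The same probes can be replayed by hand using the commands recorded in the log, sketched below; "minikube ssh -p <profile>" is assumed as the way onto the node, and <profile> is a placeholder for the profile under test, not a name taken from this log:

	    # Is a kube-apiserver process running at all?
	    minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Query the CRI (via crictl) for each expected control-plane container.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name="$name"
	    done
	    # Gather the same unit logs the test harness collects.
	    minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400
	    minikube ssh -p <profile> -- sudo journalctl -u containerd -n 400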
	I1206 08:56:56.126662   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:56.136918   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:56.136978   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:56.165346   54452 cri.go:89] found id: ""
	I1206 08:56:56.165359   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.165376   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:56.165382   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:56.165447   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:56.194525   54452 cri.go:89] found id: ""
	I1206 08:56:56.194538   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.194545   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:56.194562   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:56.194621   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:56.220295   54452 cri.go:89] found id: ""
	I1206 08:56:56.220309   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.220316   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:56.220321   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:56.220377   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:56.244567   54452 cri.go:89] found id: ""
	I1206 08:56:56.244580   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.244587   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:56.244592   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:56.244648   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:56.267992   54452 cri.go:89] found id: ""
	I1206 08:56:56.268005   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.268012   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:56.268018   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:56.268076   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:56.295817   54452 cri.go:89] found id: ""
	I1206 08:56:56.295830   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.295837   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:56.295843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:56.295904   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:56.319421   54452 cri.go:89] found id: ""
	I1206 08:56:56.319435   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.319442   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:56.319450   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:56.319460   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:56.350423   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:56.350439   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:56.407158   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:56.407176   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:56.417732   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:56.417747   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:56.488632   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:56.480052   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.480705   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.482573   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.483242   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.484311   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:56.480052   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.480705   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.482573   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.483242   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.484311   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:56.488642   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:56.488652   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:59.061980   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:59.072278   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:59.072339   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:59.101215   54452 cri.go:89] found id: ""
	I1206 08:56:59.101228   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.101235   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:59.101241   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:59.101302   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:59.127327   54452 cri.go:89] found id: ""
	I1206 08:56:59.127342   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.127349   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:59.127355   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:59.127442   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:59.152367   54452 cri.go:89] found id: ""
	I1206 08:56:59.152381   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.152388   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:59.152393   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:59.152461   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:59.176595   54452 cri.go:89] found id: ""
	I1206 08:56:59.176609   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.176616   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:59.176622   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:59.176680   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:59.201640   54452 cri.go:89] found id: ""
	I1206 08:56:59.201654   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.201661   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:59.201667   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:59.201725   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:59.228000   54452 cri.go:89] found id: ""
	I1206 08:56:59.228015   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.228023   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:59.228028   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:59.228097   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:59.254668   54452 cri.go:89] found id: ""
	I1206 08:56:59.254681   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.254688   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:59.254696   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:59.254707   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:59.284894   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:59.284910   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:59.342586   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:59.342604   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:59.354343   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:59.354368   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:59.422837   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:59.414293   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.414916   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416482   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416892   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.418605   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:59.422847   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:59.422857   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
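
The block above is one pass of minikube's apiserver health-check loop: it looks for a kube-apiserver process, asks the CRI for each expected control-plane container, and, finding none, falls back to gathering kubelet, dmesg, containerd and node logs. A minimal sketch of the same probe, assuming only the crictl and pgrep invocations visible in the log (not minikube's actual source):

    #!/usr/bin/env bash
    # Check for the apiserver process, then for each control-plane container,
    # mirroring the sequence of commands in the log above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container found matching \"$name\""
    done
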
	I1206 08:57:01.987724   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:02.004462   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:02.004525   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:02.037544   54452 cri.go:89] found id: ""
	I1206 08:57:02.037558   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.037565   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:02.037571   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:02.037629   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:02.064737   54452 cri.go:89] found id: ""
	I1206 08:57:02.064750   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.064759   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:02.064765   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:02.064822   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:02.090594   54452 cri.go:89] found id: ""
	I1206 08:57:02.090607   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.090615   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:02.090620   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:02.090677   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:02.118059   54452 cri.go:89] found id: ""
	I1206 08:57:02.118073   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.118080   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:02.118086   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:02.118142   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:02.147171   54452 cri.go:89] found id: ""
	I1206 08:57:02.147184   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.147191   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:02.147197   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:02.147258   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:02.178322   54452 cri.go:89] found id: ""
	I1206 08:57:02.178336   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.178343   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:02.178349   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:02.178409   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:02.206125   54452 cri.go:89] found id: ""
	I1206 08:57:02.206140   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.206148   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:02.206156   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:02.206166   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:02.268742   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:02.268760   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:02.298364   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:02.298379   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:02.360782   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:02.360799   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:02.372144   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:02.372159   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:02.440932   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:02.432342   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.433106   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435042   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435754   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.436799   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:04.941190   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:04.951545   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:04.951607   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:04.989383   54452 cri.go:89] found id: ""
	I1206 08:57:04.989398   54452 logs.go:282] 0 containers: []
	W1206 08:57:04.989406   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:04.989413   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:04.989480   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:05.024563   54452 cri.go:89] found id: ""
	I1206 08:57:05.024580   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.024588   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:05.024593   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:05.024654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:05.054247   54452 cri.go:89] found id: ""
	I1206 08:57:05.054260   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.054267   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:05.054272   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:05.054332   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:05.079563   54452 cri.go:89] found id: ""
	I1206 08:57:05.079582   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.079589   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:05.079594   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:05.079654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:05.104268   54452 cri.go:89] found id: ""
	I1206 08:57:05.104281   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.104288   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:05.104294   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:05.104354   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:05.133366   54452 cri.go:89] found id: ""
	I1206 08:57:05.133389   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.133399   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:05.133404   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:05.133473   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:05.157604   54452 cri.go:89] found id: ""
	I1206 08:57:05.157618   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.157625   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:05.157633   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:05.157644   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:05.169011   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:05.169026   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:05.232729   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:05.223674   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.224539   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226385   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226913   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.228611   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:05.232739   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:05.232750   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:05.295112   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:05.295130   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:05.323164   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:05.323180   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
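
Every "describe nodes" attempt fails identically: kubectl gets connection refused on https://localhost:8441, meaning nothing is listening on the apiserver port at all, rather than the apiserver being up but rejecting the request. A quick way to confirm that distinction on the node, assuming the port from the log and standard curl/ss availability:

    # "connection refused" = no listener; a TLS or HTTP error here would mean
    # the apiserver is at least accepting connections.
    curl -ksS https://localhost:8441/healthz || true
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"
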
	I1206 08:57:07.880424   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:07.890491   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:07.890546   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:07.919674   54452 cri.go:89] found id: ""
	I1206 08:57:07.919688   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.919695   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:07.919702   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:07.919765   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:07.944058   54452 cri.go:89] found id: ""
	I1206 08:57:07.944072   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.944080   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:07.944085   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:07.944143   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:07.975197   54452 cri.go:89] found id: ""
	I1206 08:57:07.975211   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.975219   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:07.975223   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:07.975286   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:08.003528   54452 cri.go:89] found id: ""
	I1206 08:57:08.003551   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.003559   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:08.003565   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:08.003632   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:08.042231   54452 cri.go:89] found id: ""
	I1206 08:57:08.042244   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.042251   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:08.042264   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:08.042340   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:08.070769   54452 cri.go:89] found id: ""
	I1206 08:57:08.070783   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.070800   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:08.070806   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:08.070863   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:08.095705   54452 cri.go:89] found id: ""
	I1206 08:57:08.095722   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.095729   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:08.095736   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:08.095745   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:08.152794   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:08.152812   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:08.163981   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:08.164009   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:08.231637   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:08.223305   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.223828   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225446   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225934   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.227447   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:08.231648   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:08.231659   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:08.294693   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:08.294710   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:10.824685   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:10.834735   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:10.834797   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:10.861282   54452 cri.go:89] found id: ""
	I1206 08:57:10.861297   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.861304   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:10.861309   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:10.861380   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:10.889560   54452 cri.go:89] found id: ""
	I1206 08:57:10.889573   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.889580   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:10.889585   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:10.889646   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:10.918582   54452 cri.go:89] found id: ""
	I1206 08:57:10.918597   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.918605   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:10.918611   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:10.918677   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:10.945055   54452 cri.go:89] found id: ""
	I1206 08:57:10.945068   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.945075   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:10.945081   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:10.945142   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:10.971779   54452 cri.go:89] found id: ""
	I1206 08:57:10.971807   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.971814   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:10.971820   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:10.971883   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:11.007014   54452 cri.go:89] found id: ""
	I1206 08:57:11.007028   54452 logs.go:282] 0 containers: []
	W1206 08:57:11.007035   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:11.007041   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:11.007103   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:11.033387   54452 cri.go:89] found id: ""
	I1206 08:57:11.033415   54452 logs.go:282] 0 containers: []
	W1206 08:57:11.033422   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:11.033431   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:11.033441   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:11.103950   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:11.094735   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.095599   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097342   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097718   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.099415   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:11.103962   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:11.103972   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:11.168820   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:11.168839   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:11.199653   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:11.199669   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:11.258665   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:11.258682   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:13.770048   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:13.780437   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:13.780537   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:13.804490   54452 cri.go:89] found id: ""
	I1206 08:57:13.804504   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.804511   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:13.804517   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:13.804576   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:13.828142   54452 cri.go:89] found id: ""
	I1206 08:57:13.828156   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.828163   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:13.828173   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:13.828234   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:13.852993   54452 cri.go:89] found id: ""
	I1206 08:57:13.853006   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.853013   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:13.853017   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:13.853073   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:13.876970   54452 cri.go:89] found id: ""
	I1206 08:57:13.876983   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.876990   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:13.876996   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:13.877057   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:13.906173   54452 cri.go:89] found id: ""
	I1206 08:57:13.906189   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.906196   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:13.906201   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:13.906260   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:13.932656   54452 cri.go:89] found id: ""
	I1206 08:57:13.932670   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.932677   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:13.932682   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:13.932744   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:13.958494   54452 cri.go:89] found id: ""
	I1206 08:57:13.958507   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.958514   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:13.958522   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:13.958533   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:13.969906   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:13.969925   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:14.055494   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:14.045404   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.046095   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.048372   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.049321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.050244   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:14.055511   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:14.055523   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:14.119159   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:14.119179   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:14.151907   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:14.151925   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
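
With no control-plane containers present, the kubelet journal gathered above is the artifact most likely to explain the failure: the apiserver runs as a static pod, so the kubelet is what should have started it. A sketch of narrowing those journals to the relevant lines, assuming the same systemd unit names the harness queries:

    # Same journals the harness reads, filtered to failures:
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
    sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
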
	I1206 08:57:16.720554   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:16.731520   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:16.731584   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:16.757438   54452 cri.go:89] found id: ""
	I1206 08:57:16.757452   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.757458   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:16.757463   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:16.757520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:16.782537   54452 cri.go:89] found id: ""
	I1206 08:57:16.782552   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.782559   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:16.782564   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:16.782619   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:16.811967   54452 cri.go:89] found id: ""
	I1206 08:57:16.811981   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.811988   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:16.811993   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:16.812051   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:16.840450   54452 cri.go:89] found id: ""
	I1206 08:57:16.840464   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.840471   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:16.840477   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:16.840553   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:16.865953   54452 cri.go:89] found id: ""
	I1206 08:57:16.865968   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.865975   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:16.865981   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:16.866043   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:16.890520   54452 cri.go:89] found id: ""
	I1206 08:57:16.890540   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.890547   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:16.890552   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:16.890611   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:16.915368   54452 cri.go:89] found id: ""
	I1206 08:57:16.915411   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.915418   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:16.915425   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:16.915435   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:16.975773   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:16.975792   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:16.990535   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:16.990557   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:17.060425   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:17.052130   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.052751   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054271   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054603   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.056244   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:17.060435   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:17.060446   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:17.124040   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:17.124060   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:19.655902   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:19.666330   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:19.666398   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:19.695219   54452 cri.go:89] found id: ""
	I1206 08:57:19.695232   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.695239   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:19.695245   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:19.695309   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:19.720027   54452 cri.go:89] found id: ""
	I1206 08:57:19.720041   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.720048   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:19.720053   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:19.720112   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:19.745773   54452 cri.go:89] found id: ""
	I1206 08:57:19.745787   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.745794   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:19.745799   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:19.745858   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:19.770885   54452 cri.go:89] found id: ""
	I1206 08:57:19.770898   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.770905   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:19.770910   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:19.770970   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:19.797192   54452 cri.go:89] found id: ""
	I1206 08:57:19.797205   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.797212   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:19.797218   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:19.797278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:19.825222   54452 cri.go:89] found id: ""
	I1206 08:57:19.825236   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.825243   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:19.825248   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:19.825314   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:19.855303   54452 cri.go:89] found id: ""
	I1206 08:57:19.855317   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.855324   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:19.855332   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:19.855342   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:19.912412   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:19.912430   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:19.924673   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:19.924689   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:20.010098   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:19.995577   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998398   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998837   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.003925   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.004952   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:20.010109   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:20.010121   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:20.081433   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:20.081453   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:22.615286   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:22.625653   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:22.625713   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:22.650708   54452 cri.go:89] found id: ""
	I1206 08:57:22.650721   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.650728   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:22.650734   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:22.650793   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:22.675795   54452 cri.go:89] found id: ""
	I1206 08:57:22.675809   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.675816   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:22.675821   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:22.675876   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:22.700140   54452 cri.go:89] found id: ""
	I1206 08:57:22.700153   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.700160   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:22.700165   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:22.700224   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:22.726855   54452 cri.go:89] found id: ""
	I1206 08:57:22.726869   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.726876   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:22.726882   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:22.726938   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:22.751934   54452 cri.go:89] found id: ""
	I1206 08:57:22.751947   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.751954   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:22.751960   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:22.752017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:22.780047   54452 cri.go:89] found id: ""
	I1206 08:57:22.780061   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.780068   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:22.780074   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:22.780132   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:22.804185   54452 cri.go:89] found id: ""
	I1206 08:57:22.804199   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.804206   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:22.804214   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:22.804230   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:22.814840   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:22.814855   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:22.881877   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:22.873545   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.874258   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.875884   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.876440   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.878086   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
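	Every "describe nodes" attempt in this phase fails the same way: kubectl cannot reach the API server because nothing is listening on localhost:8441, which matches the empty crictl listings for every control-plane component above. A quick manual check from inside the node would look like this (a sketch; it assumes shell access to the node and crictl on the PATH, neither of which is shown in this log):
	
		sudo crictl ps -a --name=kube-apiserver
		curl -sk https://localhost:8441/healthz || echo "apiserver not listening"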
	I1206 08:57:22.881887   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:22.881897   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:22.949826   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:22.949846   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:22.990802   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:22.990820   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
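	The block above is one full iteration of minikube's apiserver wait loop: pgrep for a kube-apiserver process, a crictl query per control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and then a log sweep over kubelet, dmesg, describe nodes, containerd, and container status. The per-component check can be reproduced by hand; a minimal sketch using the same crictl invocation as the log:
	
		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
		  echo "== $c =="
		  sudo crictl ps -a --quiet --name="$c"
		done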
	I1206 08:57:25.557401   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:25.567869   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:25.567931   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:25.593044   54452 cri.go:89] found id: ""
	I1206 08:57:25.593058   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.593065   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:25.593070   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:25.593131   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:25.621119   54452 cri.go:89] found id: ""
	I1206 08:57:25.621134   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.621141   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:25.621146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:25.621206   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:25.649977   54452 cri.go:89] found id: ""
	I1206 08:57:25.649991   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.649998   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:25.650003   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:25.650066   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:25.674573   54452 cri.go:89] found id: ""
	I1206 08:57:25.674586   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.674593   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:25.674598   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:25.674654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:25.700412   54452 cri.go:89] found id: ""
	I1206 08:57:25.700425   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.700432   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:25.700438   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:25.700501   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:25.726656   54452 cri.go:89] found id: ""
	I1206 08:57:25.726670   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.726686   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:25.726691   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:25.726760   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:25.751625   54452 cri.go:89] found id: ""
	I1206 08:57:25.751639   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.751646   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:25.751653   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:25.751664   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:25.812914   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:25.804895   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.805687   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807191   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807672   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.809146   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:25.812924   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:25.812936   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:25.875880   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:25.875898   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:25.905301   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:25.905316   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:25.964301   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:25.964320   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:28.477584   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:28.487626   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:28.487685   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:28.516024   54452 cri.go:89] found id: ""
	I1206 08:57:28.516038   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.516045   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:28.516050   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:28.516109   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:28.542151   54452 cri.go:89] found id: ""
	I1206 08:57:28.542165   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.542172   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:28.542177   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:28.542234   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:28.569963   54452 cri.go:89] found id: ""
	I1206 08:57:28.569977   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.569984   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:28.569989   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:28.570047   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:28.594336   54452 cri.go:89] found id: ""
	I1206 08:57:28.594350   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.594357   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:28.594362   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:28.594421   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:28.620834   54452 cri.go:89] found id: ""
	I1206 08:57:28.620846   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.620854   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:28.620859   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:28.620916   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:28.645672   54452 cri.go:89] found id: ""
	I1206 08:57:28.645686   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.645693   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:28.645698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:28.645762   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:28.670982   54452 cri.go:89] found id: ""
	I1206 08:57:28.670997   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.671004   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:28.671011   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:28.671022   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:28.729216   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:28.729234   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:28.741378   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:28.741394   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:28.808285   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:28.799664   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.800557   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802319   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802654   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.804202   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:28.808296   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:28.808308   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:28.872187   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:28.872205   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:31.410802   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:31.421507   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:31.421567   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:31.449192   54452 cri.go:89] found id: ""
	I1206 08:57:31.449206   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.449213   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:31.449219   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:31.449278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:31.479043   54452 cri.go:89] found id: ""
	I1206 08:57:31.479057   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.479070   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:31.479075   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:31.479138   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:31.504010   54452 cri.go:89] found id: ""
	I1206 08:57:31.504024   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.504031   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:31.504036   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:31.504094   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:31.529789   54452 cri.go:89] found id: ""
	I1206 08:57:31.529807   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.529818   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:31.529824   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:31.529890   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:31.555332   54452 cri.go:89] found id: ""
	I1206 08:57:31.555346   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.555354   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:31.555359   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:31.555449   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:31.585896   54452 cri.go:89] found id: ""
	I1206 08:57:31.585909   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.585916   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:31.585922   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:31.585980   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:31.610938   54452 cri.go:89] found id: ""
	I1206 08:57:31.610950   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.610958   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:31.610965   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:31.610975   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:31.667535   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:31.667553   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:31.680211   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:31.680234   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:31.750810   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:31.742704   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.743477   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745233   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745766   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.746756   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:31.750821   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:31.750833   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:31.813960   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:31.813983   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:34.341858   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:34.352097   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:34.352170   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:34.379126   54452 cri.go:89] found id: ""
	I1206 08:57:34.379140   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.379148   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:34.379153   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:34.379211   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:34.404136   54452 cri.go:89] found id: ""
	I1206 08:57:34.404150   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.404158   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:34.404163   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:34.404222   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:34.429318   54452 cri.go:89] found id: ""
	I1206 08:57:34.429333   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.429340   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:34.429346   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:34.429410   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:34.454607   54452 cri.go:89] found id: ""
	I1206 08:57:34.454621   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.454628   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:34.454633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:34.454689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:34.481702   54452 cri.go:89] found id: ""
	I1206 08:57:34.481715   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.481722   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:34.481727   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:34.481786   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:34.506222   54452 cri.go:89] found id: ""
	I1206 08:57:34.506236   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.506242   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:34.506247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:34.506307   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:34.531791   54452 cri.go:89] found id: ""
	I1206 08:57:34.531804   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.531811   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:34.531818   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:34.531829   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:34.542352   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:34.542368   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:34.605646   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:34.597261   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.597943   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.599605   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.600148   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.601815   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:34.605655   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:34.605666   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:34.668800   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:34.668818   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:34.703806   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:34.703822   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:37.265019   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:37.275013   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:37.275073   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:37.300683   54452 cri.go:89] found id: ""
	I1206 08:57:37.300696   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.300704   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:37.300710   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:37.300768   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:37.326083   54452 cri.go:89] found id: ""
	I1206 08:57:37.326096   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.326103   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:37.326109   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:37.326169   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:37.354381   54452 cri.go:89] found id: ""
	I1206 08:57:37.354395   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.354402   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:37.354407   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:37.354467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:37.379048   54452 cri.go:89] found id: ""
	I1206 08:57:37.379062   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.379069   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:37.379074   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:37.379132   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:37.407083   54452 cri.go:89] found id: ""
	I1206 08:57:37.407097   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.407104   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:37.407120   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:37.407179   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:37.430756   54452 cri.go:89] found id: ""
	I1206 08:57:37.430769   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.430777   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:37.430782   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:37.430839   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:37.459469   54452 cri.go:89] found id: ""
	I1206 08:57:37.459483   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.459490   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:37.459498   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:37.459510   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:37.470844   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:37.470860   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:37.538783   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:37.530038   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.530744   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.532506   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.533299   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.534867   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:37.538793   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:37.538804   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:37.604935   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:37.604954   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:37.637474   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:37.637491   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
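	The same cycle repeats roughly every three seconds (08:57:22, :25, :28, :31, :34, :37, :40) with identical results: no control-plane containers ever appear, so the wait can only run out its clock. The rough shape of the loop, as a hypothetical sketch (the real implementation is minikube's restartPrimaryControlPlane path in kubeadm.go, not this shell):
	
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		  sleep 3   # minikube gathers kubelet/dmesg/containerd logs between attempts
		done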
	I1206 08:57:40.195736   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:40.205728   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:40.205790   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:40.242821   54452 cri.go:89] found id: ""
	I1206 08:57:40.242834   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.242841   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:40.242847   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:40.242902   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:40.284606   54452 cri.go:89] found id: ""
	I1206 08:57:40.284620   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.284628   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:40.284633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:40.284689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:40.317256   54452 cri.go:89] found id: ""
	I1206 08:57:40.317270   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.317277   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:40.317282   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:40.317339   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:40.341890   54452 cri.go:89] found id: ""
	I1206 08:57:40.341904   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.341911   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:40.341916   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:40.341971   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:40.365889   54452 cri.go:89] found id: ""
	I1206 08:57:40.365902   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.365909   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:40.365915   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:40.365970   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:40.390366   54452 cri.go:89] found id: ""
	I1206 08:57:40.390379   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.390386   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:40.390393   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:40.390451   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:40.414154   54452 cri.go:89] found id: ""
	I1206 08:57:40.414168   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.414174   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:40.414182   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:40.414192   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:40.425672   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:40.425688   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:40.491793   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:40.479914   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.480484   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485346   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485909   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.487745   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:40.491804   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:40.491815   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:40.554734   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:40.554754   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:40.585496   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:40.585511   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:43.142927   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:43.152875   54452 kubeadm.go:602] duration metric: took 4m4.203206664s to restartPrimaryControlPlane
	W1206 08:57:43.152943   54452 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 08:57:43.153014   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
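	After 4m4s of fruitless polling, minikube abandons the restart path and falls back to wiping the node state before re-running kubeadm init. The reset it issues is equivalent to:
	
		sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
		  kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	
	where --force skips the interactive confirmation and --cri-socket pins kubeadm to the containerd socket instead of letting it autodetect a runtime.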
	I1206 08:57:43.558005   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:57:43.571431   54452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:57:43.579298   54452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 08:57:43.579354   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:57:43.587284   54452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 08:57:43.587293   54452 kubeadm.go:158] found existing configuration files:
	
	I1206 08:57:43.587347   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:57:43.595209   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 08:57:43.595263   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 08:57:43.602677   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:57:43.610821   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 08:57:43.610884   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:57:43.618219   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:57:43.625867   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 08:57:43.625922   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:57:43.633373   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:57:43.640818   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 08:57:43.640880   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
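	The "config check" above verifies that each kubeconfig under /etc/kubernetes still points at the expected control-plane endpoint; since kubeadm reset already deleted them all, every grep exits with status 2 (file missing) and minikube removes the files defensively anyway, which is why "found existing configuration files:" was followed by an empty list. The whole cleanup amounts to this logic (a sketch, not minikube's actual code):
	
		ep="https://control-plane.minikube.internal:8441"
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
		done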
	I1206 08:57:43.648275   54452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 08:57:43.690498   54452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 08:57:43.690790   54452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 08:57:43.763599   54452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 08:57:43.763663   54452 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 08:57:43.763697   54452 kubeadm.go:319] OS: Linux
	I1206 08:57:43.763740   54452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 08:57:43.763787   54452 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 08:57:43.763833   54452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 08:57:43.763880   54452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 08:57:43.763928   54452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 08:57:43.763975   54452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 08:57:43.764019   54452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 08:57:43.764066   54452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 08:57:43.764112   54452 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 08:57:43.838707   54452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 08:57:43.838810   54452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 08:57:43.838899   54452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 08:57:43.843797   54452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 08:57:43.849166   54452 out.go:252]   - Generating certificates and keys ...
	I1206 08:57:43.849248   54452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 08:57:43.849312   54452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 08:57:43.849386   54452 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 08:57:43.849451   54452 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 08:57:43.849520   54452 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 08:57:43.849572   54452 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 08:57:43.849633   54452 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 08:57:43.849693   54452 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 08:57:43.849766   54452 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 08:57:43.849838   54452 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 08:57:43.849874   54452 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 08:57:43.849928   54452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 08:57:44.005203   54452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 08:57:44.248156   54452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 08:57:44.506601   54452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 08:57:44.747606   54452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 08:57:44.875144   54452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 08:57:44.875922   54452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 08:57:44.878561   54452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 08:57:44.881876   54452 out.go:252]   - Booting up control plane ...
	I1206 08:57:44.881976   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 08:57:44.882052   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 08:57:44.882117   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 08:57:44.902770   54452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 08:57:44.902884   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 08:57:44.910887   54452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 08:57:44.915557   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 08:57:44.915618   54452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 08:57:45.072565   54452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 08:57:45.072679   54452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:01:45.073201   54452 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00139193s
	I1206 09:01:45.073230   54452 kubeadm.go:319] 
	I1206 09:01:45.073292   54452 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:01:45.073325   54452 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:01:45.073460   54452 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:01:45.073475   54452 kubeadm.go:319] 
	I1206 09:01:45.073605   54452 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:01:45.073641   54452 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:01:45.073671   54452 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:01:45.073674   54452 kubeadm.go:319] 
	I1206 09:01:45.079541   54452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:01:45.080019   54452 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:01:45.080137   54452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:01:45.080372   54452 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 09:01:45.080377   54452 kubeadm.go:319] 
	W1206 09:01:45.080611   54452 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00139193s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
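[editor's note] The wait-control-plane failure above is kubeadm polling the kubelet's local health endpoint until the 4m0s deadline expires. A minimal sketch of reproducing that probe by hand, assuming the functional-090986 node is still running, that the test binary's ssh subcommand works for this profile, and that curl is available inside the node image:

    # Probe the kubelet health endpoint the same way kubeadm's check does.
    # A healthy kubelet answers 200; in this run the connection is refused
    # because the kubelet exits during configuration validation.
    out/minikube-linux-arm64 -p functional-090986 ssh \
      "curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:10248/healthz"

    # The two follow-ups kubeadm suggests, run inside the node.
    out/minikube-linux-arm64 -p functional-090986 ssh "sudo systemctl status kubelet --no-pager"
    out/minikube-linux-arm64 -p functional-090986 ssh "sudo journalctl -xeu kubelet -n 50 --no-pager"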
	I1206 09:01:45.080716   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:01:45.081059   54452 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:01:45.527784   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:01:45.541714   54452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:01:45.541768   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:01:45.549724   54452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:01:45.549735   54452 kubeadm.go:158] found existing configuration files:
	
	I1206 09:01:45.549787   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 09:01:45.557657   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:01:45.557710   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:01:45.565116   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 09:01:45.572963   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:01:45.573017   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:01:45.580604   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 09:01:45.588212   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:01:45.588267   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:01:45.595779   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 09:01:45.604082   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:01:45.604137   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
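[editor's note] The grep-then-rm sequence above is minikube's stale kubeconfig cleanup: each file under /etc/kubernetes survives only if it already points at the expected control-plane endpoint, and is otherwise deleted so the retried kubeadm init can regenerate it. A one-line equivalent for a single file, with the endpoint and path taken from this run:

    # Keep admin.conf only if it targets the expected endpoint; otherwise
    # remove it so the next 'kubeadm init' writes a fresh copy.
    sudo grep -q "https://control-plane.minikube.internal:8441" /etc/kubernetes/admin.conf \
      || sudo rm -f /etc/kubernetes/admin.conf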
	I1206 09:01:45.612084   54452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:01:45.650374   54452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:01:45.650428   54452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:01:45.720642   54452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:01:45.720706   54452 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:01:45.720740   54452 kubeadm.go:319] OS: Linux
	I1206 09:01:45.720783   54452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:01:45.720831   54452 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:01:45.720876   54452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:01:45.720923   54452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:01:45.720970   54452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:01:45.721017   54452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:01:45.721061   54452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:01:45.721107   54452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:01:45.721153   54452 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:01:45.786361   54452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:01:45.786476   54452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:01:45.786571   54452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:01:45.791901   54452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:01:45.795433   54452 out.go:252]   - Generating certificates and keys ...
	I1206 09:01:45.795514   54452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:01:45.795578   54452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:01:45.795654   54452 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 09:01:45.795714   54452 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 09:01:45.795783   54452 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 09:01:45.795835   54452 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 09:01:45.795898   54452 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 09:01:45.795958   54452 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 09:01:45.796032   54452 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 09:01:45.796104   54452 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 09:01:45.796185   54452 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 09:01:45.796240   54452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:01:45.935718   54452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:01:46.055895   54452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:01:46.294260   54452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:01:46.619812   54452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:01:46.778456   54452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:01:46.779211   54452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:01:46.782067   54452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:01:46.785434   54452 out.go:252]   - Booting up control plane ...
	I1206 09:01:46.785536   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:01:46.785617   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:01:46.785688   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:01:46.805726   54452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:01:46.805831   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:01:46.814430   54452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:01:46.816546   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:01:46.816591   54452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:01:46.952811   54452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:01:46.952924   54452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:05:46.951725   54452 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00022284s
	I1206 09:05:46.951748   54452 kubeadm.go:319] 
	I1206 09:05:46.951804   54452 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:05:46.951836   54452 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:05:46.951939   54452 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:05:46.951944   54452 kubeadm.go:319] 
	I1206 09:05:46.952047   54452 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:05:46.952078   54452 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:05:46.952108   54452 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:05:46.952111   54452 kubeadm.go:319] 
	I1206 09:05:46.956655   54452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:05:46.957065   54452 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:05:46.957172   54452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:05:46.957405   54452 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 09:05:46.957409   54452 kubeadm.go:319] 
	I1206 09:05:46.957479   54452 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
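[editor's note] The repeated SystemVerification warning names the escape hatch for this failure mode: kubelet v1.35 refuses to start on a cgroup v1 host unless 'FailCgroupV1' is set to 'false'. A minimal sketch of that override as a KubeletConfiguration fragment; the lower-camel spelling failCgroupV1 is the v1beta1 serialized field name, and wiring this file into minikube's kubeadm patches is an assumption, not something this log shows:

    # Hypothetical drop-in illustrating the option the warning names.
    cat <<'EOF' > kubelet-cgroupv1.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF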
	I1206 09:05:46.957537   54452 kubeadm.go:403] duration metric: took 12m8.043807841s to StartCluster
	I1206 09:05:46.957567   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:05:46.957632   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:05:47.005263   54452 cri.go:89] found id: ""
	I1206 09:05:47.005276   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.005284   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 09:05:47.005289   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:05:47.005348   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:05:47.039824   54452 cri.go:89] found id: ""
	I1206 09:05:47.039837   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.039844   54452 logs.go:284] No container was found matching "etcd"
	I1206 09:05:47.039849   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:05:47.039907   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:05:47.069199   54452 cri.go:89] found id: ""
	I1206 09:05:47.069215   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.069222   54452 logs.go:284] No container was found matching "coredns"
	I1206 09:05:47.069228   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:05:47.069290   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:05:47.094120   54452 cri.go:89] found id: ""
	I1206 09:05:47.094134   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.094141   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 09:05:47.094146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:05:47.094204   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:05:47.117873   54452 cri.go:89] found id: ""
	I1206 09:05:47.117887   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.117895   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:05:47.117900   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:05:47.117957   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:05:47.141782   54452 cri.go:89] found id: ""
	I1206 09:05:47.141796   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.141803   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 09:05:47.141809   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:05:47.141869   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:05:47.167265   54452 cri.go:89] found id: ""
	I1206 09:05:47.167280   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.167287   54452 logs.go:284] No container was found matching "kindnet"
	I1206 09:05:47.167295   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 09:05:47.167314   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:05:47.224071   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 09:05:47.224090   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:05:47.235798   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:05:47.235814   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:05:47.303156   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:05:47.295299   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.295956   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297451   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297881   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.299336   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 09:05:47.295299   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.295956   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297451   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297881   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.299336   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:05:47.303181   54452 logs.go:123] Gathering logs for containerd ...
	I1206 09:05:47.303191   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:05:47.366843   54452 logs.go:123] Gathering logs for container status ...
	I1206 09:05:47.366863   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
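[editor's note] The log-gathering pass above is reproducible by hand; each probe is a plain crictl or journalctl call inside the node. A short sketch matching the commands minikube runs, assuming ssh access to the profile:

    # List any control-plane containers the CRI knows about (empty in this run).
    out/minikube-linux-arm64 -p functional-090986 ssh "sudo crictl ps -a --name=kube-apiserver"

    # Pull the same kubelet and containerd journals minikube collects.
    out/minikube-linux-arm64 -p functional-090986 ssh "sudo journalctl -u kubelet -n 400 --no-pager"
    out/minikube-linux-arm64 -p functional-090986 ssh "sudo journalctl -u containerd -n 400 --no-pager"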
	W1206 09:05:47.396270   54452 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 09:05:47.396302   54452 out.go:285] * 
	W1206 09:05:47.396359   54452 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:05:47.396374   54452 out.go:285] * 
	W1206 09:05:47.398505   54452 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
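[editor's note] The boxed advice above is the canonical capture step for bug reports; it bundles the per-component logs shown later in this section into a single file. Example invocation for this profile:

    # Write the full log bundle to logs.txt for attaching to a GitHub issue.
    out/minikube-linux-arm64 logs --file=logs.txt -p functional-090986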
	I1206 09:05:47.405628   54452 out.go:203] 
	W1206 09:05:47.408588   54452 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:05:47.408634   54452 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 09:05:47.408679   54452 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
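[editor's note] The suggestion above maps directly onto a start flag. A hedged retry of this profile's start with the suggested override added (other flags from the original invocation omitted; whether this clears the cgroup v1 validation error on this kernel is not verified by this run):

    out/minikube-linux-arm64 start -p functional-090986 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd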
	I1206 09:05:47.411976   54452 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948296356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948312964Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948379313Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948412085Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948441403Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948462491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948482866Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948510698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948529111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948562713Z" level=info msg="Connect containerd service"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948903673Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.949608593Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967402561Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967484402Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967550135Z" level=info msg="Start subscribing containerd event"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967692902Z" level=info msg="Start recovering state"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019042107Z" level=info msg="Start event monitor"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019110196Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019122955Z" level=info msg="Start streaming server"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019132310Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019140786Z" level=info msg="runtime interface starting up..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019147531Z" level=info msg="starting plugins..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019160085Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 08:53:37 functional-090986 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.020711198Z" level=info msg="containerd successfully booted in 0.094795s"
	
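[editor's note] The "failed to load cni during init" error above is expected at this point in boot: minikube writes the CNI config after the node is up, so an empty /etc/cni/net.d only indicates a problem if it persists once the cluster is running. A quick check, assuming ssh access:

    # Empty before CNI setup; populated once the selected CNI writes its config.
    out/minikube-linux-arm64 -p functional-090986 ssh "ls -la /etc/cni/net.d"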
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:07:40.435454   23056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:40.435866   23056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:40.437585   23056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:40.438032   23056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:40.439543   23056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 09:07:40 up 50 min,  0 user,  load average: 1.83, 0.56, 0.46
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 09:07:37 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:37 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 468.
	Dec 06 09:07:37 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:37 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:38 functional-090986 kubelet[22887]: E1206 09:07:38.038723   22887 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:38 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:38 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:38 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 469.
	Dec 06 09:07:38 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:38 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:38 functional-090986 kubelet[22917]: E1206 09:07:38.766306   22917 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:38 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:38 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:39 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 470.
	Dec 06 09:07:39 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:39 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:39 functional-090986 kubelet[22963]: E1206 09:07:39.522697   22963 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:39 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:39 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:40 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 471.
	Dec 06 09:07:40 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:40 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:40 functional-090986 kubelet[23013]: E1206 09:07:40.286049   23013 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:40 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:40 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
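[editor's note] The kubelet crash loop in the log above (restart counters 468 through 471, identical validation error each time) is what a cgroup v1 host looks like to kubelet v1.35. A quick way to confirm which cgroup version the node sees: the filesystem type of /sys/fs/cgroup is cgroup2fs on a v2 host and tmpfs on v1.

    # tmpfs     => cgroup v1 (matches the validation failure in this run)
    # cgroup2fs => cgroup v2
    out/minikube-linux-arm64 -p functional-090986 ssh "stat -fc %T /sys/fs/cgroup"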
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (365.833656ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.06s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-090986 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-090986 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (57.33813ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-090986 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-090986 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-090986 describe po hello-node-connect: exit status 1 (56.331195ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-090986 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-090986 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-090986 logs -l app=hello-node-connect: exit status 1 (71.055368ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-090986 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-090986 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-090986 describe svc hello-node-connect: exit status 1 (73.494711ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-090986 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
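[editor's note] Every kubectl call in this service post-mortem fails identically: the TCP connection to 192.168.49.2:8441 is refused, so the deployment was never created and the describe/logs probes cannot succeed either. A minimal reachability check that separates "apiserver down" from "object missing", using the same context:

    # Both fail fast with 'connection refused' while the apiserver is down;
    # a 200 from /healthz would shift suspicion to the workload itself.
    kubectl --context functional-090986 cluster-info
    kubectl --context functional-090986 get --raw /healthz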
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
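The inspect output shows the container itself is healthy and that 8441/tcp is published to 127.0.0.1:32791, so the refusal happens inside the node rather than in Docker's port mapping. A sketch for checking that mapping from the host, reusing the Go template minikube itself runs later in this log (the curl probe is an assumption):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-090986
	curl -sk https://127.0.0.1:32791/version   # a refusal here too would point at the apiserver process, not the port forwarding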
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (325.318825ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
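"Running" here refers only to the host (the docker container), while the non-zero exit code appears to reflect the state of the other components, which is why the helper treats exit status 2 as possibly benign. Dropping the --format filter shows the per-component breakdown (same binary and profile as the command above; shown as a sketch):

	out/minikube-linux-arm64 status -p functional-090986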
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-090986 cache reload                                                                                                                               │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ ssh     │ functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │ 06 Dec 25 08:53 UTC │
	│ kubectl │ functional-090986 kubectl -- --context functional-090986 get pods                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	│ start   │ -p functional-090986 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:53 UTC │                     │
	│ config  │ functional-090986 config unset cpus                                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ cp      │ functional-090986 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ config  │ functional-090986 config get cpus                                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │                     │
	│ config  │ functional-090986 config set cpus 2                                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ config  │ functional-090986 config get cpus                                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ config  │ functional-090986 config unset cpus                                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ config  │ functional-090986 config get cpus                                                                                                                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │                     │
	│ ssh     │ functional-090986 ssh -n functional-090986 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ ssh     │ functional-090986 ssh echo hello                                                                                                                             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ cp      │ functional-090986 cp functional-090986:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1959122984/001/cp-test.txt │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ ssh     │ functional-090986 ssh cat /etc/hostname                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ tunnel  │ functional-090986 tunnel --alsologtostderr                                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │                     │
	│ tunnel  │ functional-090986 tunnel --alsologtostderr                                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │                     │
	│ ssh     │ functional-090986 ssh -n functional-090986 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ tunnel  │ functional-090986 tunnel --alsologtostderr                                                                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │                     │
	│ cp      │ functional-090986 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ ssh     │ functional-090986 ssh -n functional-090986 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:05 UTC │ 06 Dec 25 09:05 UTC │
	│ addons  │ functional-090986 addons list                                                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ addons  │ functional-090986 addons list -o json                                                                                                                        │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:53:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:53:33.876279   54452 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:53:33.876426   54452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:53:33.876430   54452 out.go:374] Setting ErrFile to fd 2...
	I1206 08:53:33.876434   54452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:53:33.876677   54452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:53:33.877013   54452 out.go:368] Setting JSON to false
	I1206 08:53:33.877825   54452 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2165,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:53:33.877882   54452 start.go:143] virtualization:  
	I1206 08:53:33.881239   54452 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:53:33.885112   54452 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:53:33.885177   54452 notify.go:221] Checking for updates...
	I1206 08:53:33.891576   54452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:53:33.894372   54452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:53:33.897142   54452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:53:33.900076   54452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:53:33.902894   54452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:53:33.906249   54452 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:53:33.906348   54452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:53:33.928682   54452 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:53:33.928770   54452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:53:33.993741   54452 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 08:53:33.983085793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:53:33.993843   54452 docker.go:319] overlay module found
	I1206 08:53:33.999105   54452 out.go:179] * Using the docker driver based on existing profile
	I1206 08:53:34.002148   54452 start.go:309] selected driver: docker
	I1206 08:53:34.002159   54452 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:34.002241   54452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:53:34.002360   54452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:53:34.059754   54452 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 08:53:34.048620994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:53:34.060212   54452 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:53:34.060235   54452 cni.go:84] Creating CNI manager for ""
	I1206 08:53:34.060282   54452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:53:34.060330   54452 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:34.065569   54452 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
	I1206 08:53:34.068398   54452 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:53:34.071322   54452 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:53:34.074275   54452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:53:34.074316   54452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:53:34.074325   54452 cache.go:65] Caching tarball of preloaded images
	I1206 08:53:34.074364   54452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:53:34.074457   54452 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 08:53:34.074467   54452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 08:53:34.074577   54452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
	I1206 08:53:34.094292   54452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 08:53:34.094303   54452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 08:53:34.094322   54452 cache.go:243] Successfully downloaded all kic artifacts
	I1206 08:53:34.094352   54452 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:53:34.094428   54452 start.go:364] duration metric: took 60.843µs to acquireMachinesLock for "functional-090986"
	I1206 08:53:34.094446   54452 start.go:96] Skipping create...Using existing machine configuration
	I1206 08:53:34.094451   54452 fix.go:54] fixHost starting: 
	I1206 08:53:34.094714   54452 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
	I1206 08:53:34.110952   54452 fix.go:112] recreateIfNeeded on functional-090986: state=Running err=<nil>
	W1206 08:53:34.110973   54452 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 08:53:34.114350   54452 out.go:252] * Updating the running docker "functional-090986" container ...
	I1206 08:53:34.114380   54452 machine.go:94] provisionDockerMachine start ...
	I1206 08:53:34.114470   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.132110   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.132436   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.132441   54452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:53:34.290732   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:53:34.290745   54452 ubuntu.go:182] provisioning hostname "functional-090986"
	I1206 08:53:34.290806   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.309786   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.310075   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.310083   54452 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
	I1206 08:53:34.468771   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
	
	I1206 08:53:34.468838   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:34.492421   54452 main.go:143] libmachine: Using SSH client type: native
	I1206 08:53:34.492726   54452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1206 08:53:34.492743   54452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:53:34.643743   54452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:53:34.643757   54452 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 08:53:34.643785   54452 ubuntu.go:190] setting up certificates
	I1206 08:53:34.643793   54452 provision.go:84] configureAuth start
	I1206 08:53:34.643849   54452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:53:34.661031   54452 provision.go:143] copyHostCerts
	I1206 08:53:34.661090   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 08:53:34.661103   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 08:53:34.661173   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 08:53:34.661279   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 08:53:34.661283   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 08:53:34.661307   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 08:53:34.661364   54452 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 08:53:34.661367   54452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 08:53:34.661387   54452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 08:53:34.661440   54452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
	I1206 08:53:35.261601   54452 provision.go:177] copyRemoteCerts
	I1206 08:53:35.261659   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:53:35.261707   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.278502   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.383098   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 08:53:35.400343   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 08:53:35.417458   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 08:53:35.434271   54452 provision.go:87] duration metric: took 790.45575ms to configureAuth
	I1206 08:53:35.434289   54452 ubuntu.go:206] setting minikube options for container-runtime
	I1206 08:53:35.434485   54452 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 08:53:35.434491   54452 machine.go:97] duration metric: took 1.320106202s to provisionDockerMachine
	I1206 08:53:35.434498   54452 start.go:293] postStartSetup for "functional-090986" (driver="docker")
	I1206 08:53:35.434507   54452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:53:35.434552   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:53:35.434601   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.452073   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.559110   54452 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:53:35.562282   54452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 08:53:35.562301   54452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 08:53:35.562313   54452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 08:53:35.562372   54452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 08:53:35.562453   54452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 08:53:35.562529   54452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
	I1206 08:53:35.562578   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
	I1206 08:53:35.569704   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:53:35.586692   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
	I1206 08:53:35.603733   54452 start.go:296] duration metric: took 169.221467ms for postStartSetup
	I1206 08:53:35.603809   54452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:53:35.603847   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.620625   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.725607   54452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 08:53:35.730716   54452 fix.go:56] duration metric: took 1.636258463s for fixHost
	I1206 08:53:35.730732   54452 start.go:83] releasing machines lock for "functional-090986", held for 1.636296668s
	I1206 08:53:35.730797   54452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
	I1206 08:53:35.748170   54452 ssh_runner.go:195] Run: cat /version.json
	I1206 08:53:35.748211   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.748450   54452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:53:35.748491   54452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
	I1206 08:53:35.780618   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.788438   54452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
	I1206 08:53:35.895097   54452 ssh_runner.go:195] Run: systemctl --version
	I1206 08:53:35.994868   54452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 08:53:36.000428   54452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:53:36.000495   54452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:53:36.008950   54452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 08:53:36.008964   54452 start.go:496] detecting cgroup driver to use...
	I1206 08:53:36.008997   54452 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 08:53:36.009046   54452 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 08:53:36.024586   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 08:53:36.037573   54452 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:53:36.037628   54452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:53:36.053442   54452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:53:36.066493   54452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:53:36.187062   54452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:53:36.308311   54452 docker.go:234] disabling docker service ...
	I1206 08:53:36.308366   54452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:53:36.324390   54452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:53:36.337942   54452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:53:36.464363   54452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:53:36.601173   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:53:36.614787   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:53:36.630199   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 08:53:36.639943   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 08:53:36.649262   54452 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 08:53:36.649336   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 08:53:36.657952   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:53:36.666666   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 08:53:36.675637   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 08:53:36.684412   54452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:53:36.692740   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 08:53:36.701838   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 08:53:36.712344   54452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 08:53:36.721508   54452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:53:36.729269   54452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:53:36.736851   54452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:53:36.864978   54452 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 08:53:37.021054   54452 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 08:53:37.021112   54452 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 08:53:37.025377   54452 start.go:564] Will wait 60s for crictl version
	I1206 08:53:37.025433   54452 ssh_runner.go:195] Run: which crictl
	I1206 08:53:37.029231   54452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 08:53:37.053402   54452 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 08:53:37.053462   54452 ssh_runner.go:195] Run: containerd --version
	I1206 08:53:37.077672   54452 ssh_runner.go:195] Run: containerd --version
	I1206 08:53:37.104087   54452 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
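The sed edits above rewrite /etc/containerd/config.toml in place (SystemdCgroup=false for the cgroupfs driver, the pause sandbox image, the CNI conf_dir) before containerd is restarted. A sketch for spot-checking the result on the node (the grep pattern is illustrative, not part of this run):

	minikube ssh -p functional-090986 "sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml"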
	I1206 08:53:37.107051   54452 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 08:53:37.126470   54452 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 08:53:37.133471   54452 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 08:53:37.136362   54452 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:53:37.136495   54452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 08:53:37.136575   54452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:53:37.161065   54452 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:53:37.161078   54452 containerd.go:534] Images already preloaded, skipping extraction
	I1206 08:53:37.161139   54452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:53:37.189850   54452 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 08:53:37.189861   54452 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:53:37.189866   54452 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 08:53:37.189968   54452 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:53:37.190042   54452 ssh_runner.go:195] Run: sudo crictl info
	I1206 08:53:37.215125   54452 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 08:53:37.215146   54452 cni.go:84] Creating CNI manager for ""
	I1206 08:53:37.215156   54452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:53:37.215169   54452 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:53:37.215191   54452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:53:37.215303   54452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-090986"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
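	The runner writes this rendered config to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). As a hedged sketch, a config like this can be sanity-checked inside the node without mutating cluster state, since kubeadm validates the file during a dry run:

    # Sketch, not part of the test run: validate the rendered config.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run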
	
	I1206 08:53:37.215394   54452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 08:53:37.223611   54452 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:53:37.223674   54452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:53:37.231742   54452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 08:53:37.245618   54452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 08:53:37.258873   54452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1206 08:53:37.272656   54452 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 08:53:37.277122   54452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:53:37.404546   54452 ssh_runner.go:195] Run: sudo systemctl start kubelet
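With the unit file and drop-in written, daemon-reload plus start brings kubelet up under the new flags. A hedged way to confirm the drop-in took effect on the node:

    # Sketch: inspect the merged unit; the drop-in written above
    # (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) should appear.
    sudo systemctl cat kubelet
    sudo journalctl -u kubelet -n 50 --no-pager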
	I1206 08:53:38.220934   54452 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
	I1206 08:53:38.220945   54452 certs.go:195] generating shared ca certs ...
	I1206 08:53:38.220959   54452 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:53:38.221099   54452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 08:53:38.221148   54452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 08:53:38.221154   54452 certs.go:257] generating profile certs ...
	I1206 08:53:38.221235   54452 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
	I1206 08:53:38.221287   54452 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
	I1206 08:53:38.221325   54452 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
	I1206 08:53:38.221433   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 08:53:38.221466   54452 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 08:53:38.221473   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:53:38.221504   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 08:53:38.221527   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:53:38.221551   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 08:53:38.221601   54452 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 08:53:38.222193   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:53:38.247995   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 08:53:38.268014   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:53:38.289184   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:53:38.308825   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 08:53:38.326629   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:53:38.344198   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:53:38.361819   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:53:38.379442   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 08:53:38.397025   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:53:38.414583   54452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 08:53:38.432182   54452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:53:38.444938   54452 ssh_runner.go:195] Run: openssl version
	I1206 08:53:38.451220   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.458796   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:53:38.466335   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.470195   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.470251   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:53:38.511660   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:53:38.520107   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.527562   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 08:53:38.535252   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.539202   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.539257   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 08:53:38.580913   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 08:53:38.589267   54452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.596722   54452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 08:53:38.604956   54452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.609011   54452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.609077   54452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 08:53:38.654662   54452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
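The test -s / ln -fs / openssl x509 -hash sequences above implement OpenSSL's subject-hash lookup convention: trust lookups in /etc/ssl/certs resolve through symlinks named <subject-hash>.0. A minimal sketch of the same idiom:

    # Sketch of the hash-symlink convention performed above; the hash printed
    # by openssl (e.g. b5213941 for minikubeCA.pem) names the symlink.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"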
	I1206 08:53:38.662094   54452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:53:38.666110   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 08:53:38.707066   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 08:53:38.748028   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 08:53:38.790291   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 08:53:38.831326   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 08:53:38.872506   54452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
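Each -checkend 86400 run above asserts that the certificate stays valid for at least 24 hours (86400 seconds); openssl exits non-zero if the cert would expire inside that window, and that exit status is what decides between keeping and regenerating a cert. A hedged example of the idiom:

    # Sketch: -checkend's exit status drives the keep-or-regenerate decision.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "valid for at least 24h"
    else
      echo "expires within 24h"
    fi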
	I1206 08:53:38.913738   54452 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:53:38.913828   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 08:53:38.913894   54452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:53:38.941817   54452 cri.go:89] found id: ""
	I1206 08:53:38.941888   54452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:53:38.949650   54452 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 08:53:38.949660   54452 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 08:53:38.949712   54452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 08:53:38.957046   54452 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:38.957552   54452 kubeconfig.go:125] found "functional-090986" server: "https://192.168.49.2:8441"
	I1206 08:53:38.960001   54452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 08:53:38.973807   54452 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 08:39:02.953222088 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 08:53:37.265532344 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
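Drift detection rests on diff's exit status: diff -u exits 0 when the files match and 1 when they differ, and a non-zero status is what sends the runner down the reconfigure path (ending in the cp of kubeadm.yaml.new over kubeadm.yaml below). A hedged sketch of the decision:

    # Sketch: non-zero diff status = config drift, so promote the new file.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi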
	I1206 08:53:38.973835   54452 kubeadm.go:1161] stopping kube-system containers ...
	I1206 08:53:38.973855   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 08:53:38.973990   54452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:53:39.006630   54452 cri.go:89] found id: ""
	I1206 08:53:39.006691   54452 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 08:53:39.027188   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:53:39.035115   54452 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  6 08:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 08:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 08:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  6 08:43 /etc/kubernetes/scheduler.conf
	
	I1206 08:53:39.035195   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:53:39.043346   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:53:39.051128   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.051184   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:53:39.058808   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:53:39.066431   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.066486   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:53:39.074261   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:53:39.082004   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:53:39.082060   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 08:53:39.089693   54452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:53:39.097973   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:39.144114   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.034967   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.247090   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 08:53:40.303335   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
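Rather than a full kubeadm init, the restart path replays individual init phases against the same config: certs, kubeconfigs, kubelet start, control-plane static pods, then local etcd. The five commands above are equivalent to this hedged loop:

    # Sketch of the phased restart sequence executed above.
    KUBEADM=/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo $KUBEADM init phase $phase --config "$CFG"
    done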
	I1206 08:53:40.358218   54452 api_server.go:52] waiting for apiserver process to appear ...
	I1206 08:53:40.358284   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:40.858753   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:41.358700   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:41.858760   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:42.359143   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:42.859214   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:43.358859   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:43.858475   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:44.358512   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:44.859201   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:45.358789   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:45.858829   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:46.358595   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:46.858465   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:47.358809   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:47.858516   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:48.358367   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:48.859203   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:49.359207   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:49.858491   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:50.359361   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:50.859136   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:51.358696   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:51.858427   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:52.358504   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:52.858356   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:53.359243   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:53.859142   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:54.359242   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:54.859316   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:55.359059   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:55.858609   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:56.359350   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:56.859078   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:57.359214   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:57.859097   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:58.359174   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:58.858946   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:59.358533   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:53:59.859078   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:00.358576   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:00.859407   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:01.358874   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:01.858512   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:02.358441   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:02.858517   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:03.359363   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:03.859400   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:04.359276   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:04.859156   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:05.358974   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:05.858357   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:06.359182   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:06.859168   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:07.359160   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:07.859209   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:08.359310   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:08.859102   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:09.358600   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:09.859219   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:10.359034   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:10.858816   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:11.358429   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:11.858433   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:12.359162   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:12.859196   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:13.358899   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:13.858468   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:14.359028   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:14.858481   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:15.359221   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:15.858792   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:16.358493   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:16.859448   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:17.359360   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:17.859153   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:18.358389   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:18.859216   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:19.359289   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:19.858488   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:20.359257   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:20.859245   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:21.359184   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:21.859040   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:22.358496   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:22.859325   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:23.358553   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:23.858649   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:24.358999   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:24.858487   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:25.359321   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:25.859061   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:26.358793   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:26.858844   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:27.358536   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:27.859274   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:28.359019   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:28.858738   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:29.359019   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:29.858548   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:30.358369   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:30.859081   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:31.359088   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:31.858895   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:32.359444   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:32.859328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:33.359199   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:33.858413   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:34.358493   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:34.858487   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:35.359338   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:35.858497   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:36.358475   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:36.858480   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:37.359209   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:37.858485   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:38.359088   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:38.858716   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:39.358992   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:39.859022   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
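The probe timestamps above alternate around .358 and .858 within each second: the runner re-runs pgrep roughly every 500 ms waiting for a kube-apiserver process to appear, and after about a minute with no match it falls back to collecting diagnostics below. The wait is equivalent to this hedged sketch (the iteration bound is an assumption):

    # Sketch: bounded ~0.5s polling for the apiserver process, as logged above.
    for i in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done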
	I1206 08:54:40.358688   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:40.358791   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:40.388106   54452 cri.go:89] found id: ""
	I1206 08:54:40.388120   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.388134   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:40.388140   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:40.388201   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:40.412432   54452 cri.go:89] found id: ""
	I1206 08:54:40.412446   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.412453   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:40.412458   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:40.412515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:40.436247   54452 cri.go:89] found id: ""
	I1206 08:54:40.436261   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.436268   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:40.436274   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:40.436334   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:40.461648   54452 cri.go:89] found id: ""
	I1206 08:54:40.461662   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.461669   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:40.461674   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:40.461731   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:40.490826   54452 cri.go:89] found id: ""
	I1206 08:54:40.490840   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.490846   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:40.490851   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:40.490912   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:40.517246   54452 cri.go:89] found id: ""
	I1206 08:54:40.517259   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.517266   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:40.517272   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:40.517331   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:40.542129   54452 cri.go:89] found id: ""
	I1206 08:54:40.542144   54452 logs.go:282] 0 containers: []
	W1206 08:54:40.542150   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:40.542157   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:40.542167   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:40.599816   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:40.599836   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:40.610692   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:40.610709   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:40.681214   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:40.671721   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673072   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673914   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.675628   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.676278   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:40.671721   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673072   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.673914   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.675628   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:40.676278   10784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
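The repeated "connection refused" on localhost:8441 is consistent with crictl finding no kube-apiserver container: nothing is listening on the apiserver port at all. A hedged direct check from inside the node:

    # Sketch: confirm no listener is bound to the apiserver port.
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"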
	I1206 08:54:40.681229   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:40.681240   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:40.746611   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:40.746631   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:43.275588   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:43.286822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:43.286894   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:43.313760   54452 cri.go:89] found id: ""
	I1206 08:54:43.313779   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.313786   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:43.313793   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:43.313852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:43.338174   54452 cri.go:89] found id: ""
	I1206 08:54:43.338188   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.338203   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:43.338208   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:43.338278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:43.362249   54452 cri.go:89] found id: ""
	I1206 08:54:43.362263   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.362270   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:43.362275   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:43.362333   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:43.386332   54452 cri.go:89] found id: ""
	I1206 08:54:43.386345   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.386353   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:43.386358   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:43.386413   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:43.413265   54452 cri.go:89] found id: ""
	I1206 08:54:43.413278   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.413285   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:43.413290   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:43.413346   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:43.437411   54452 cri.go:89] found id: ""
	I1206 08:54:43.437424   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.437431   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:43.437436   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:43.437497   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:43.463006   54452 cri.go:89] found id: ""
	I1206 08:54:43.463019   54452 logs.go:282] 0 containers: []
	W1206 08:54:43.463046   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:43.463054   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:43.463065   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:43.531909   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:43.523361   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.524077   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525611   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525984   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.527554   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:43.523361   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.524077   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525611   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.525984   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:43.527554   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:43.531920   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:43.531930   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:43.596428   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:43.596447   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:43.625653   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:43.625669   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:43.685656   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:43.685675   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:46.197048   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:46.207403   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:46.207468   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:46.259332   54452 cri.go:89] found id: ""
	I1206 08:54:46.259345   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.259361   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:46.259367   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:46.259453   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:46.293591   54452 cri.go:89] found id: ""
	I1206 08:54:46.293604   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.293611   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:46.293616   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:46.293674   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:46.324320   54452 cri.go:89] found id: ""
	I1206 08:54:46.324333   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.324340   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:46.324345   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:46.324403   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:46.349505   54452 cri.go:89] found id: ""
	I1206 08:54:46.349519   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.349526   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:46.349531   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:46.349592   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:46.372944   54452 cri.go:89] found id: ""
	I1206 08:54:46.372958   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.372965   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:46.372970   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:46.373028   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:46.397863   54452 cri.go:89] found id: ""
	I1206 08:54:46.397876   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.397884   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:46.397889   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:46.397947   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:46.423405   54452 cri.go:89] found id: ""
	I1206 08:54:46.423419   54452 logs.go:282] 0 containers: []
	W1206 08:54:46.423426   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:46.423434   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:46.423444   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:46.479557   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:46.479577   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:46.490975   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:46.490992   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:46.555476   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:46.546289   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.547116   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.548919   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.549655   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.551369   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:46.546289   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.547116   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.548919   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.549655   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:46.551369   10992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:46.555486   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:46.555499   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:46.617650   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:46.617666   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:49.145146   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:49.156935   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:49.157011   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:49.181313   54452 cri.go:89] found id: ""
	I1206 08:54:49.181327   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.181334   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:49.181339   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:49.181396   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:49.205770   54452 cri.go:89] found id: ""
	I1206 08:54:49.205783   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.205792   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:49.205797   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:49.205854   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:49.246208   54452 cri.go:89] found id: ""
	I1206 08:54:49.246232   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.246240   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:49.246245   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:49.246312   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:49.276707   54452 cri.go:89] found id: ""
	I1206 08:54:49.276720   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.276739   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:49.276744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:49.276817   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:49.304665   54452 cri.go:89] found id: ""
	I1206 08:54:49.304684   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.304691   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:49.304696   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:49.304754   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:49.329874   54452 cri.go:89] found id: ""
	I1206 08:54:49.329888   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.329895   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:49.329901   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:49.329967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:49.355459   54452 cri.go:89] found id: ""
	I1206 08:54:49.355473   54452 logs.go:282] 0 containers: []
	W1206 08:54:49.355480   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:49.355487   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:49.355503   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:49.383334   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:49.383349   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:49.438134   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:49.438151   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:49.449298   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:49.449313   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:49.517360   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:49.507622   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.508394   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510126   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510650   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.512155   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:49.507622   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.508394   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510126   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.510650   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:49.512155   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:49.517370   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:49.517380   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
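	[editor's note] The block above is one iteration of minikube's apiserver wait loop: it probes each expected control-plane container by name via crictl and, finding none, falls back to gathering diagnostics. A minimal sketch of the same per-component probe, assuming crictl is installed and containerd uses its default socket (the component list is taken from the log; the loop itself is illustrative, not minikube's actual code):

	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet; do
	      # mirrors the "sudo crictl ps -a --quiet --name=..." calls above
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container found matching \"$name\""
	    done
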
	I1206 08:54:52.080828   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:52.091103   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:52.091181   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:52.116535   54452 cri.go:89] found id: ""
	I1206 08:54:52.116549   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.116556   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:52.116570   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:52.116633   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:52.142398   54452 cri.go:89] found id: ""
	I1206 08:54:52.142412   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.142424   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:52.142429   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:52.142485   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:52.169937   54452 cri.go:89] found id: ""
	I1206 08:54:52.169951   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.169958   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:52.169963   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:52.170020   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:52.200818   54452 cri.go:89] found id: ""
	I1206 08:54:52.200832   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.200838   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:52.200843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:52.200899   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:52.228819   54452 cri.go:89] found id: ""
	I1206 08:54:52.228833   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.228841   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:52.228846   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:52.228908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:52.258951   54452 cri.go:89] found id: ""
	I1206 08:54:52.258964   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.258972   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:52.258977   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:52.259042   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:52.294986   54452 cri.go:89] found id: ""
	I1206 08:54:52.295000   54452 logs.go:282] 0 containers: []
	W1206 08:54:52.295007   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:52.295015   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:52.295025   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:52.362225   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:52.362245   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:52.389713   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:52.389729   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:52.445119   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:52.445137   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:52.458958   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:52.458980   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:52.523486   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:52.514851   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.515698   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517261   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517893   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.519458   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:52.514851   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.515698   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517261   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.517893   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:52.519458   11216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:55.023766   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:55.034751   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:55.034820   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:55.060938   54452 cri.go:89] found id: ""
	I1206 08:54:55.060952   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.060960   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:55.060965   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:55.061025   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:55.086352   54452 cri.go:89] found id: ""
	I1206 08:54:55.086365   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.086383   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:55.086389   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:55.086457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:55.111318   54452 cri.go:89] found id: ""
	I1206 08:54:55.111334   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.111341   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:55.111346   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:55.111427   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:55.140103   54452 cri.go:89] found id: ""
	I1206 08:54:55.140118   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.140125   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:55.140130   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:55.140194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:55.164478   54452 cri.go:89] found id: ""
	I1206 08:54:55.164492   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.164500   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:55.164505   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:55.164565   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:55.191182   54452 cri.go:89] found id: ""
	I1206 08:54:55.191195   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.191203   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:55.191209   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:55.191266   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:55.216083   54452 cri.go:89] found id: ""
	I1206 08:54:55.216097   54452 logs.go:282] 0 containers: []
	W1206 08:54:55.216104   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:55.216111   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:55.216122   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:55.303982   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:55.294944   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.295756   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.297492   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.298117   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.299945   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:55.294944   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.295756   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.297492   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.298117   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:55.299945   11302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:55.303992   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:55.304003   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:55.365857   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:55.365875   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:55.393911   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:55.393928   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:54:55.455110   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:55.455129   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
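	[editor's note] Every "describe nodes" attempt above fails identically: dial tcp [::1]:8441: connect: connection refused, i.e. nothing is listening on the configured apiserver port yet, so kubectl cannot even fetch the API group list. A hedged way to reproduce the check by hand inside the node, assuming nc is available (any TCP probe would do; the kubectl path and kubeconfig are copied from the log):

	    if nc -z localhost 8441; then
	      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	           --kubeconfig=/var/lib/minikube/kubeconfig
	    else
	      echo "apiserver not listening on 8441 yet; kubectl will fail"
	    fi
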
	I1206 08:54:57.967188   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:54:57.977408   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:54:57.977467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:54:58.003574   54452 cri.go:89] found id: ""
	I1206 08:54:58.003588   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.003596   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:54:58.003601   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:54:58.003662   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:54:58.029323   54452 cri.go:89] found id: ""
	I1206 08:54:58.029337   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.029344   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:54:58.029348   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:54:58.029408   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:54:58.054996   54452 cri.go:89] found id: ""
	I1206 08:54:58.055010   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.055018   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:54:58.055023   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:54:58.055087   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:54:58.079698   54452 cri.go:89] found id: ""
	I1206 08:54:58.079711   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.079718   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:54:58.079723   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:54:58.079785   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:54:58.106383   54452 cri.go:89] found id: ""
	I1206 08:54:58.106396   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.106403   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:54:58.106408   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:54:58.106467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:54:58.135301   54452 cri.go:89] found id: ""
	I1206 08:54:58.135315   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.135325   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:54:58.135330   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:54:58.135431   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:54:58.165240   54452 cri.go:89] found id: ""
	I1206 08:54:58.165255   54452 logs.go:282] 0 containers: []
	W1206 08:54:58.165262   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:54:58.165269   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:54:58.165279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:54:58.176468   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:54:58.176483   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:54:58.263783   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:54:58.246836   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.247297   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.255628   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.256461   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.259475   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:54:58.246836   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.247297   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.255628   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.256461   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:54:58.259475   11404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:54:58.263793   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:54:58.263806   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:54:58.336059   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:54:58.336078   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:54:58.364550   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:54:58.364565   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:00.926395   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:00.936607   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:00.936669   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:00.961767   54452 cri.go:89] found id: ""
	I1206 08:55:00.961781   54452 logs.go:282] 0 containers: []
	W1206 08:55:00.961788   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:00.961793   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:00.961855   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:00.987655   54452 cri.go:89] found id: ""
	I1206 08:55:00.987671   54452 logs.go:282] 0 containers: []
	W1206 08:55:00.987678   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:00.987684   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:00.987753   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:01.017321   54452 cri.go:89] found id: ""
	I1206 08:55:01.017335   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.017342   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:01.017347   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:01.017405   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:01.043120   54452 cri.go:89] found id: ""
	I1206 08:55:01.043134   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.043140   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:01.043146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:01.043208   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:01.069934   54452 cri.go:89] found id: ""
	I1206 08:55:01.069951   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.069958   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:01.069967   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:01.070037   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:01.095743   54452 cri.go:89] found id: ""
	I1206 08:55:01.095757   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.095765   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:01.095772   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:01.095832   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:01.120915   54452 cri.go:89] found id: ""
	I1206 08:55:01.120933   54452 logs.go:282] 0 containers: []
	W1206 08:55:01.120940   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:01.120948   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:01.120958   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:01.179366   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:01.179392   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:01.191802   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:01.191818   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:01.292667   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:01.282943   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284116   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284837   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.286639   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.287228   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:01.282943   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284116   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.284837   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.286639   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:01.287228   11505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:01.292676   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:01.292687   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:01.357710   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:01.357729   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
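	[editor's note] Each failed iteration gathers the same five diagnostics, only in varying order. They can be replayed by hand inside the node; the commands and the -n 400 tail lengths below are copied from the log itself (backticks rewritten as $(...)):

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u containerd -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
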
	I1206 08:55:03.889702   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:03.900135   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:03.900194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:03.926097   54452 cri.go:89] found id: ""
	I1206 08:55:03.926122   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.926129   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:03.926135   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:03.926204   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:03.950796   54452 cri.go:89] found id: ""
	I1206 08:55:03.950810   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.950818   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:03.950823   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:03.950881   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:03.976998   54452 cri.go:89] found id: ""
	I1206 08:55:03.977012   54452 logs.go:282] 0 containers: []
	W1206 08:55:03.977018   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:03.977024   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:03.977083   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:04.004847   54452 cri.go:89] found id: ""
	I1206 08:55:04.004862   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.004870   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:04.004876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:04.004943   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:04.030715   54452 cri.go:89] found id: ""
	I1206 08:55:04.030729   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.030737   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:04.030742   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:04.030806   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:04.056324   54452 cri.go:89] found id: ""
	I1206 08:55:04.056338   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.056345   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:04.056351   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:04.056412   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:04.082124   54452 cri.go:89] found id: ""
	I1206 08:55:04.082137   54452 logs.go:282] 0 containers: []
	W1206 08:55:04.082145   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:04.082152   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:04.082163   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:04.138719   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:04.138737   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:04.150252   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:04.150269   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:04.220848   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:04.209917   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.210563   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212138   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212692   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.214187   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:04.209917   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.210563   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212138   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.212692   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:04.214187   11609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:04.220858   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:04.220868   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:04.293646   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:04.293665   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:06.823180   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:06.833518   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:06.833576   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:06.863092   54452 cri.go:89] found id: ""
	I1206 08:55:06.863106   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.863113   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:06.863119   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:06.863177   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:06.888504   54452 cri.go:89] found id: ""
	I1206 08:55:06.888519   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.888525   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:06.888530   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:06.888595   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:06.918175   54452 cri.go:89] found id: ""
	I1206 08:55:06.918189   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.918197   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:06.918202   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:06.918261   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:06.944460   54452 cri.go:89] found id: ""
	I1206 08:55:06.944473   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.944480   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:06.944485   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:06.944551   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:06.973765   54452 cri.go:89] found id: ""
	I1206 08:55:06.973778   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.973786   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:06.973791   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:06.973852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:06.999311   54452 cri.go:89] found id: ""
	I1206 08:55:06.999324   54452 logs.go:282] 0 containers: []
	W1206 08:55:06.999331   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:06.999337   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:06.999415   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:07.027677   54452 cri.go:89] found id: ""
	I1206 08:55:07.027690   54452 logs.go:282] 0 containers: []
	W1206 08:55:07.027697   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:07.027705   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:07.027715   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:07.086320   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:07.086338   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:07.097607   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:07.097623   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:07.161897   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:07.153185   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.154007   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.155730   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.156339   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.158053   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:07.153185   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.154007   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.155730   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.156339   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:07.158053   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:07.161907   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:07.161919   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:07.224772   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:07.224792   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:09.768328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:09.778939   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:09.779000   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:09.805473   54452 cri.go:89] found id: ""
	I1206 08:55:09.805487   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.805494   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:09.805499   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:09.805557   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:09.830605   54452 cri.go:89] found id: ""
	I1206 08:55:09.830618   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.830625   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:09.830630   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:09.830689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:09.855855   54452 cri.go:89] found id: ""
	I1206 08:55:09.855869   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.855876   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:09.855881   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:09.855937   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:09.880900   54452 cri.go:89] found id: ""
	I1206 08:55:09.880913   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.880920   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:09.880925   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:09.880981   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:09.906796   54452 cri.go:89] found id: ""
	I1206 08:55:09.906810   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.906817   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:09.906822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:09.906882   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:09.932980   54452 cri.go:89] found id: ""
	I1206 08:55:09.932996   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.933004   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:09.933009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:09.933081   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:09.961870   54452 cri.go:89] found id: ""
	I1206 08:55:09.961884   54452 logs.go:282] 0 containers: []
	W1206 08:55:09.961892   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:09.961900   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:09.961922   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:10.018106   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:10.018129   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:10.031414   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:10.031441   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:10.103678   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:10.092903   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.093952   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.095756   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.096440   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.098120   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:10.092903   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.093952   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.095756   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.096440   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:10.098120   11820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:10.103689   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:10.103700   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:10.167044   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:10.167063   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:12.697325   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:12.707894   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:12.707958   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:12.732888   54452 cri.go:89] found id: ""
	I1206 08:55:12.732902   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.732914   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:12.732919   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:12.732975   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:12.756939   54452 cri.go:89] found id: ""
	I1206 08:55:12.756953   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.756960   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:12.756965   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:12.757026   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:12.785954   54452 cri.go:89] found id: ""
	I1206 08:55:12.785967   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.785974   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:12.785979   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:12.786037   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:12.810560   54452 cri.go:89] found id: ""
	I1206 08:55:12.810574   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.810581   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:12.810586   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:12.810643   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:12.835829   54452 cri.go:89] found id: ""
	I1206 08:55:12.835844   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.835851   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:12.835856   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:12.835917   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:12.860638   54452 cri.go:89] found id: ""
	I1206 08:55:12.860653   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.860660   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:12.860665   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:12.860723   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:12.885721   54452 cri.go:89] found id: ""
	I1206 08:55:12.885734   54452 logs.go:282] 0 containers: []
	W1206 08:55:12.885742   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:12.885750   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:12.885760   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:12.944772   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:12.944793   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:12.956560   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:12.956577   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:13.023566   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:13.013901   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.014692   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.016414   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.017110   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.019101   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:13.013901   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.014692   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.016414   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.017110   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:13.019101   11927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:13.023586   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:13.023596   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:13.086592   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:13.086612   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
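
The block above is one complete iteration of the health-check loop that repeats for the rest of this log: the runner polls for a kube-apiserver process (pgrep), lists CRI containers for each control-plane component via "crictl ps -a --quiet --name=...", finds none, gathers kubelet/dmesg/"describe nodes"/containerd/container-status diagnostics, then retries roughly every three seconds. The following is a hypothetical Go sketch of that shape only, not minikube's actual implementation (the real logic lives in cri.go and logs.go); containerIDs and the component list here are illustrative assumptions.

// Hypothetical sketch of the retry loop visible in the log above;
// not minikube's actual code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs runs `crictl ps -a --quiet --name=<name>` (as the log does)
// and returns whatever container IDs it prints, one per line.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for {
		apiserverUp := false
		for _, c := range components {
			ids := containerIDs(c)
			fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
			if c == "kube-apiserver" && len(ids) > 0 {
				apiserverUp = true
			}
		}
		if apiserverUp {
			return
		}
		// In the real log, the kubelet/dmesg/containerd journals and
		// `kubectl describe nodes` are gathered here before the next attempt.
		time.Sleep(3 * time.Second) // matches the ~3 s cadence of the timestamps above
	}
}

In this failure every iteration finds zero containers for every component, so the loop never exits and the test eventually times out.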
	I1206 08:55:15.617835   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:15.628437   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:15.628524   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:15.657209   54452 cri.go:89] found id: ""
	I1206 08:55:15.657223   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.657230   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:15.657235   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:15.657297   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:15.681664   54452 cri.go:89] found id: ""
	I1206 08:55:15.681678   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.681685   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:15.681690   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:15.681748   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:15.707568   54452 cri.go:89] found id: ""
	I1206 08:55:15.707581   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.707588   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:15.707594   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:15.707654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:15.733456   54452 cri.go:89] found id: ""
	I1206 08:55:15.733470   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.733493   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:15.733499   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:15.733558   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:15.758882   54452 cri.go:89] found id: ""
	I1206 08:55:15.758896   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.758903   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:15.758908   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:15.758967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:15.784184   54452 cri.go:89] found id: ""
	I1206 08:55:15.784198   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.784205   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:15.784210   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:15.784269   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:15.809166   54452 cri.go:89] found id: ""
	I1206 08:55:15.809178   54452 logs.go:282] 0 containers: []
	W1206 08:55:15.809186   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:15.809194   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:15.809204   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:15.865479   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:15.865498   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:15.876370   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:15.876386   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:15.949255   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:15.940278   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.941080   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.942741   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.943482   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.945251   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:15.940278   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.941080   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.942741   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.943482   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:15.945251   12031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:15.949277   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:15.949289   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:16.012838   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:16.012858   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:18.547536   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:18.557857   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:18.557924   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:18.583107   54452 cri.go:89] found id: ""
	I1206 08:55:18.583120   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.583128   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:18.583132   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:18.583192   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:18.608251   54452 cri.go:89] found id: ""
	I1206 08:55:18.608264   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.608271   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:18.608276   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:18.608333   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:18.634059   54452 cri.go:89] found id: ""
	I1206 08:55:18.634073   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.634080   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:18.634085   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:18.634158   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:18.659252   54452 cri.go:89] found id: ""
	I1206 08:55:18.659266   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.659273   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:18.659278   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:18.659338   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:18.687529   54452 cri.go:89] found id: ""
	I1206 08:55:18.687542   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.687549   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:18.687554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:18.687611   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:18.716705   54452 cri.go:89] found id: ""
	I1206 08:55:18.716719   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.716726   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:18.716731   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:18.716790   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:18.743861   54452 cri.go:89] found id: ""
	I1206 08:55:18.743875   54452 logs.go:282] 0 containers: []
	W1206 08:55:18.743882   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:18.743890   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:18.743900   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:18.800501   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:18.800520   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:18.811514   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:18.811531   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:18.877593   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:18.868814   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.869581   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871247   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871996   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.873734   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:18.868814   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.869581   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871247   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.871996   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:18.873734   12138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:18.877603   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:18.877614   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:18.945147   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:18.945175   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:21.473372   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:21.484974   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:21.485036   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:21.516585   54452 cri.go:89] found id: ""
	I1206 08:55:21.516598   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.516606   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:21.516611   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:21.516670   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:21.543917   54452 cri.go:89] found id: ""
	I1206 08:55:21.543930   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.543937   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:21.543943   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:21.544006   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:21.581932   54452 cri.go:89] found id: ""
	I1206 08:55:21.581946   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.581953   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:21.581958   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:21.582017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:21.606796   54452 cri.go:89] found id: ""
	I1206 08:55:21.606810   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.606817   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:21.606822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:21.606885   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:21.632673   54452 cri.go:89] found id: ""
	I1206 08:55:21.632686   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.632693   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:21.632698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:21.632791   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:21.656595   54452 cri.go:89] found id: ""
	I1206 08:55:21.656609   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.656616   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:21.656621   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:21.656681   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:21.681710   54452 cri.go:89] found id: ""
	I1206 08:55:21.681723   54452 logs.go:282] 0 containers: []
	W1206 08:55:21.681730   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:21.681738   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:21.681747   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:21.737731   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:21.737750   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:21.748929   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:21.748944   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:21.814714   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:21.804423   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.805260   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807123   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807866   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.809673   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:21.804423   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.805260   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807123   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.807866   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:21.809673   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:21.814725   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:21.814737   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:21.878842   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:21.878860   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:24.408240   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:24.418359   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:24.418420   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:24.445088   54452 cri.go:89] found id: ""
	I1206 08:55:24.445102   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.445109   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:24.445115   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:24.445218   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:24.481785   54452 cri.go:89] found id: ""
	I1206 08:55:24.481799   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.481807   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:24.481812   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:24.481871   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:24.514861   54452 cri.go:89] found id: ""
	I1206 08:55:24.514875   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.514882   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:24.514888   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:24.514951   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:24.545514   54452 cri.go:89] found id: ""
	I1206 08:55:24.545528   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.545535   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:24.545540   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:24.545604   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:24.571688   54452 cri.go:89] found id: ""
	I1206 08:55:24.571703   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.571710   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:24.571715   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:24.571780   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:24.596172   54452 cri.go:89] found id: ""
	I1206 08:55:24.596192   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.596200   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:24.596205   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:24.596267   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:24.621684   54452 cri.go:89] found id: ""
	I1206 08:55:24.621698   54452 logs.go:282] 0 containers: []
	W1206 08:55:24.621706   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:24.621713   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:24.621728   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:24.683261   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:24.683279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:24.717098   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:24.717115   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:24.774777   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:24.774797   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:24.786405   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:24.786422   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:24.852542   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:24.844316   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.844764   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846310   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846629   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.848126   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:24.844316   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.844764   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846310   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.846629   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:24.848126   12363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
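
Every "describe nodes" attempt in these cycles fails the same way: kubectl dials the apiserver port configured for this profile (8441, per the --apiserver-port flag in the test invocation) and gets "connect: connection refused", meaning nothing is listening there at all. A minimal illustration of that check in Go, assuming the same localhost:8441 endpoint the log shows (this is not part of the test suite, just a sketch):

// Illustrative check: is anything listening on the port kubectl is dialing?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// This is the state the log shows:
		// dial tcp [::1]:8441: connect: connection refused
		fmt.Println("apiserver port closed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on :8441")
}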
	I1206 08:55:27.352798   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:27.363390   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:27.363453   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:27.390863   54452 cri.go:89] found id: ""
	I1206 08:55:27.390877   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.390884   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:27.390891   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:27.390950   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:27.419763   54452 cri.go:89] found id: ""
	I1206 08:55:27.419777   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.419784   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:27.419789   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:27.419843   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:27.443855   54452 cri.go:89] found id: ""
	I1206 08:55:27.443868   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.443875   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:27.443880   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:27.443937   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:27.472073   54452 cri.go:89] found id: ""
	I1206 08:55:27.472086   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.472093   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:27.472099   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:27.472157   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:27.505330   54452 cri.go:89] found id: ""
	I1206 08:55:27.505344   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.505352   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:27.505357   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:27.505414   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:27.533936   54452 cri.go:89] found id: ""
	I1206 08:55:27.533950   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.533957   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:27.533962   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:27.534017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:27.562283   54452 cri.go:89] found id: ""
	I1206 08:55:27.562296   54452 logs.go:282] 0 containers: []
	W1206 08:55:27.562303   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:27.562311   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:27.562320   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:27.619092   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:27.619110   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:27.630324   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:27.630339   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:27.695241   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:27.686898   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.687546   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689114   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689707   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.691358   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:27.686898   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.687546   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689114   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.689707   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:27.691358   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:27.695251   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:27.695266   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:27.757877   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:27.757895   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:30.286157   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:30.296567   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:30.296625   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:30.321390   54452 cri.go:89] found id: ""
	I1206 08:55:30.321405   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.321413   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:30.321418   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:30.321480   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:30.350054   54452 cri.go:89] found id: ""
	I1206 08:55:30.350068   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.350075   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:30.350083   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:30.350149   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:30.375330   54452 cri.go:89] found id: ""
	I1206 08:55:30.375350   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.375358   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:30.375363   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:30.375445   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:30.406133   54452 cri.go:89] found id: ""
	I1206 08:55:30.406146   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.406153   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:30.406158   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:30.406217   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:30.434180   54452 cri.go:89] found id: ""
	I1206 08:55:30.434195   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.434202   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:30.434207   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:30.434272   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:30.461023   54452 cri.go:89] found id: ""
	I1206 08:55:30.461037   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.461044   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:30.461049   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:30.461107   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:30.493229   54452 cri.go:89] found id: ""
	I1206 08:55:30.493243   54452 logs.go:282] 0 containers: []
	W1206 08:55:30.493250   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:30.493268   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:30.493279   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:30.556454   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:30.556473   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:30.567243   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:30.567258   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:30.630618   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:30.622515   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.623325   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.624965   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.625291   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.626789   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:30.622515   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.623325   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.624965   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.625291   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:30.626789   12562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:30.630628   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:30.630638   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:30.692365   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:30.692384   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:33.222243   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:33.233203   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:33.233264   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:33.259086   54452 cri.go:89] found id: ""
	I1206 08:55:33.259099   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.259107   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:33.259113   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:33.259175   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:33.285885   54452 cri.go:89] found id: ""
	I1206 08:55:33.285912   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.285920   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:33.285926   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:33.286002   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:33.313522   54452 cri.go:89] found id: ""
	I1206 08:55:33.313536   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.313543   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:33.313554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:33.313614   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:33.343303   54452 cri.go:89] found id: ""
	I1206 08:55:33.343318   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.343335   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:33.343341   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:33.343434   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:33.372461   54452 cri.go:89] found id: ""
	I1206 08:55:33.372475   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.372482   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:33.372488   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:33.372556   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:33.398660   54452 cri.go:89] found id: ""
	I1206 08:55:33.398674   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.398682   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:33.398695   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:33.398770   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:33.425653   54452 cri.go:89] found id: ""
	I1206 08:55:33.425667   54452 logs.go:282] 0 containers: []
	W1206 08:55:33.425675   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:33.425683   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:33.425693   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:33.436575   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:33.436591   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:33.519919   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:33.509857   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.511357   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.512054   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.513835   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.514450   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:33.509857   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.511357   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.512054   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.513835   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:33.514450   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:33.519928   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:33.519939   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:33.584991   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:33.585010   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:33.617158   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:33.617175   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:36.180867   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:36.191295   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:36.191369   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:36.215504   54452 cri.go:89] found id: ""
	I1206 08:55:36.215518   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.215525   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:36.215530   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:36.215586   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:36.241860   54452 cri.go:89] found id: ""
	I1206 08:55:36.241874   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.241881   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:36.241886   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:36.241948   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:36.270206   54452 cri.go:89] found id: ""
	I1206 08:55:36.270220   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.270227   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:36.270232   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:36.270292   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:36.297638   54452 cri.go:89] found id: ""
	I1206 08:55:36.297651   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.297658   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:36.297663   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:36.297721   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:36.327655   54452 cri.go:89] found id: ""
	I1206 08:55:36.327681   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.327689   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:36.327694   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:36.327764   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:36.353797   54452 cri.go:89] found id: ""
	I1206 08:55:36.353811   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.353818   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:36.353825   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:36.353884   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:36.378781   54452 cri.go:89] found id: ""
	I1206 08:55:36.378795   54452 logs.go:282] 0 containers: []
	W1206 08:55:36.378802   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:36.378810   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:36.378823   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:36.435517   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:36.435537   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:36.446663   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:36.446679   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:36.538183   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:36.527758   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.528583   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.530703   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.531276   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.534098   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:36.527758   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.528583   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.530703   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.531276   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:36.534098   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:36.538193   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:36.538203   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:36.601364   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:36.601383   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
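The five "Gathering logs for ..." steps in this cycle are all plain shell commands executed over SSH, so the same evidence can be collected by hand from a shell on the node (for example via `minikube ssh` on the affected profile). A sketch for manual triage; the commands are copied from the Run: lines above:

    # kubelet and containerd service logs, last 400 lines each
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # kernel warnings and errors only
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # node description; fails with "connection refused" while the
    # apiserver is down, as seen throughout this log
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # container status, preferring crictl and falling back to docker
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a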
	I1206 08:55:39.129686   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:39.140306   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:39.140375   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:39.169862   54452 cri.go:89] found id: ""
	I1206 08:55:39.169876   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.169883   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:39.169889   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:39.169952   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:39.195755   54452 cri.go:89] found id: ""
	I1206 08:55:39.195771   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.195778   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:39.195784   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:39.195842   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:39.220719   54452 cri.go:89] found id: ""
	I1206 08:55:39.220732   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.220739   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:39.220744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:39.220801   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:39.249535   54452 cri.go:89] found id: ""
	I1206 08:55:39.249549   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.249556   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:39.249561   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:39.249620   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:39.281267   54452 cri.go:89] found id: ""
	I1206 08:55:39.281281   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.281288   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:39.281293   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:39.281379   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:39.306847   54452 cri.go:89] found id: ""
	I1206 08:55:39.306860   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.306867   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:39.306873   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:39.306933   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:39.334023   54452 cri.go:89] found id: ""
	I1206 08:55:39.334036   54452 logs.go:282] 0 containers: []
	W1206 08:55:39.334057   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:39.334064   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:39.334073   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:39.363589   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:39.363604   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:39.420152   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:39.420169   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:39.430815   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:39.430830   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:39.513246   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:39.503975   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.504808   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.506588   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.507212   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.508933   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:39.503975   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.504808   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.506588   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.507212   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:39.508933   12887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:39.513256   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:39.513266   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
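Every `kubectl describe nodes` attempt here dies the same way: `dial tcp [::1]:8441: connect: connection refused`, i.e. nothing is accepting connections on the apiserver port, which matches the empty `kube-apiserver` container probes. Two quick checks that would confirm this from inside the node; these are suggested triage commands, not part of the test run (`ss` ships with iproute2, and `/livez` is the apiserver's standard liveness endpoint):

    # Is any process listening on the apiserver port?
    sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"
    # Does the apiserver answer its liveness probe? (-k: self-signed cert)
    curl -ksf https://localhost:8441/livez || echo "apiserver not answering"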
	I1206 08:55:42.085786   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:42.098317   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:42.098387   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:42.134671   54452 cri.go:89] found id: ""
	I1206 08:55:42.134686   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.134695   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:42.134705   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:42.134775   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:42.167474   54452 cri.go:89] found id: ""
	I1206 08:55:42.167489   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.167498   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:42.167505   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:42.167575   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:42.202078   54452 cri.go:89] found id: ""
	I1206 08:55:42.202093   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.202100   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:42.202106   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:42.202171   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:42.228525   54452 cri.go:89] found id: ""
	I1206 08:55:42.228539   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.228546   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:42.228552   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:42.228621   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:42.257322   54452 cri.go:89] found id: ""
	I1206 08:55:42.257337   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.257344   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:42.257350   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:42.257457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:42.284221   54452 cri.go:89] found id: ""
	I1206 08:55:42.284235   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.284253   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:42.284259   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:42.284329   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:42.311654   54452 cri.go:89] found id: ""
	I1206 08:55:42.311668   54452 logs.go:282] 0 containers: []
	W1206 08:55:42.311675   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:42.311683   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:42.311694   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:42.368273   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:42.368294   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:42.379477   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:42.379493   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:42.443515   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:42.434726   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.435504   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437161   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437774   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.439236   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:42.434726   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.435504   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437161   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.437774   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:42.439236   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:42.443526   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:42.443543   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:42.512858   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:42.512878   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:45.043040   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:45.068009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:45.068076   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:45.109801   54452 cri.go:89] found id: ""
	I1206 08:55:45.109815   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.109823   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:45.109829   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:45.109896   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:45.149825   54452 cri.go:89] found id: ""
	I1206 08:55:45.149841   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.149849   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:45.149855   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:45.149929   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:45.187416   54452 cri.go:89] found id: ""
	I1206 08:55:45.187433   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.187441   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:45.187446   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:45.187520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:45.235891   54452 cri.go:89] found id: ""
	I1206 08:55:45.235908   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.235916   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:45.235922   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:45.236066   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:45.279650   54452 cri.go:89] found id: ""
	I1206 08:55:45.279665   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.279673   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:45.279681   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:45.279750   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:45.325794   54452 cri.go:89] found id: ""
	I1206 08:55:45.325844   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.325871   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:45.325893   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:45.325962   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:45.357237   54452 cri.go:89] found id: ""
	I1206 08:55:45.357251   54452 logs.go:282] 0 containers: []
	W1206 08:55:45.357258   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:45.357266   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:45.357291   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:45.385704   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:45.385720   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:45.442819   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:45.442837   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:45.454504   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:45.454523   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:45.547110   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:45.538939   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.539311   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.540633   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.541395   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.542989   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:45.538939   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.539311   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.540633   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.541395   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:45.542989   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:45.547119   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:45.547133   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:48.116344   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:48.126956   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:48.127022   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:48.152657   54452 cri.go:89] found id: ""
	I1206 08:55:48.152671   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.152678   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:48.152684   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:48.152743   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:48.182395   54452 cri.go:89] found id: ""
	I1206 08:55:48.182409   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.182417   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:48.182422   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:48.182494   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:48.211297   54452 cri.go:89] found id: ""
	I1206 08:55:48.211310   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.211327   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:48.211333   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:48.211402   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:48.236544   54452 cri.go:89] found id: ""
	I1206 08:55:48.236558   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.236565   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:48.236571   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:48.236627   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:48.262553   54452 cri.go:89] found id: ""
	I1206 08:55:48.262570   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.262582   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:48.262587   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:48.262680   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:48.295466   54452 cri.go:89] found id: ""
	I1206 08:55:48.295488   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.295495   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:48.295506   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:48.295586   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:48.321818   54452 cri.go:89] found id: ""
	I1206 08:55:48.321830   54452 logs.go:282] 0 containers: []
	W1206 08:55:48.321837   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:48.321845   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:48.321856   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:48.378211   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:48.378229   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:48.389232   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:48.389255   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:48.456700   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:48.448592   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.449583   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.450577   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.451171   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.452831   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:48.448592   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.449583   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.450577   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.451171   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:48.452831   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:48.456711   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:48.456720   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:48.523317   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:48.523335   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:51.052796   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:51.063850   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:51.063912   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:51.089613   54452 cri.go:89] found id: ""
	I1206 08:55:51.089628   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.089635   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:51.089643   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:51.089727   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:51.116588   54452 cri.go:89] found id: ""
	I1206 08:55:51.116601   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.116609   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:51.116614   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:51.116679   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:51.146172   54452 cri.go:89] found id: ""
	I1206 08:55:51.146186   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.146193   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:51.146199   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:51.146266   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:51.172046   54452 cri.go:89] found id: ""
	I1206 08:55:51.172071   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.172078   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:51.172084   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:51.172163   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:51.200464   54452 cri.go:89] found id: ""
	I1206 08:55:51.200477   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.200495   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:51.200501   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:51.200561   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:51.229170   54452 cri.go:89] found id: ""
	I1206 08:55:51.229184   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.229191   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:51.229196   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:51.229254   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:51.254375   54452 cri.go:89] found id: ""
	I1206 08:55:51.254389   54452 logs.go:282] 0 containers: []
	W1206 08:55:51.254396   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:51.254403   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:51.254413   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:51.317370   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:51.317390   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:51.344624   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:51.344642   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:51.402739   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:51.402759   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:51.413613   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:51.413629   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:51.483207   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:51.470850   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.471424   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.472991   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.473416   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.475113   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:51.470850   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.471424   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.472991   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.473416   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:51.475113   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:53.983859   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:53.997260   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:53.997326   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:54.024774   54452 cri.go:89] found id: ""
	I1206 08:55:54.024788   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.024795   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:54.024801   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:54.024866   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:54.050802   54452 cri.go:89] found id: ""
	I1206 08:55:54.050830   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.050837   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:54.050842   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:54.050911   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:54.079419   54452 cri.go:89] found id: ""
	I1206 08:55:54.079433   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.079440   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:54.079446   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:54.079517   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:54.104851   54452 cri.go:89] found id: ""
	I1206 08:55:54.104864   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.104871   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:54.104876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:54.104933   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:54.133815   54452 cri.go:89] found id: ""
	I1206 08:55:54.133829   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.133847   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:54.133853   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:54.133909   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:54.163047   54452 cri.go:89] found id: ""
	I1206 08:55:54.163071   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.163078   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:54.163083   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:54.163150   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:54.190227   54452 cri.go:89] found id: ""
	I1206 08:55:54.190242   54452 logs.go:282] 0 containers: []
	W1206 08:55:54.190249   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:54.190263   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:54.190273   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:54.246189   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:54.246208   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:54.257068   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:54.257083   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:54.322094   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:54.313214   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.313895   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.315763   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.316388   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.318125   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:54.313214   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.313895   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.315763   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.316388   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:54.318125   13403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:54.322104   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:54.322114   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:54.385131   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:54.385150   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:56.917265   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:56.927438   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:56.927499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:56.951596   54452 cri.go:89] found id: ""
	I1206 08:55:56.951611   54452 logs.go:282] 0 containers: []
	W1206 08:55:56.951618   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:56.951623   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:56.951685   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:56.975635   54452 cri.go:89] found id: ""
	I1206 08:55:56.975649   54452 logs.go:282] 0 containers: []
	W1206 08:55:56.975656   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:56.975661   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:56.975718   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:57.005275   54452 cri.go:89] found id: ""
	I1206 08:55:57.005289   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.005296   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:57.005302   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:57.005370   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:57.031301   54452 cri.go:89] found id: ""
	I1206 08:55:57.031315   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.031333   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:57.031339   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:57.031422   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:57.057133   54452 cri.go:89] found id: ""
	I1206 08:55:57.057146   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.057153   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:57.057159   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:57.057221   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:55:57.081358   54452 cri.go:89] found id: ""
	I1206 08:55:57.081371   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.081378   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:55:57.081384   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:55:57.081442   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:55:57.116018   54452 cri.go:89] found id: ""
	I1206 08:55:57.116033   54452 logs.go:282] 0 containers: []
	W1206 08:55:57.116049   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:55:57.116057   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:55:57.116067   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:55:57.171598   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:55:57.171615   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:55:57.182153   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:55:57.182169   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:55:57.245457   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:55:57.237416   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.237828   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.239402   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.240057   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.241674   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:55:57.237416   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.237828   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.239402   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.240057   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:55:57.241674   13507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:55:57.245466   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:55:57.245476   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:55:57.307969   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:55:57.307987   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:55:59.836840   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:59.846983   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:55:59.847044   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:55:59.871818   54452 cri.go:89] found id: ""
	I1206 08:55:59.871831   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.871838   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:55:59.871844   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:55:59.871904   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:55:59.896695   54452 cri.go:89] found id: ""
	I1206 08:55:59.896709   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.896716   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:55:59.896721   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:55:59.896787   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:55:59.921887   54452 cri.go:89] found id: ""
	I1206 08:55:59.921911   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.921918   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:55:59.921924   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:55:59.921998   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:55:59.948824   54452 cri.go:89] found id: ""
	I1206 08:55:59.948837   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.948845   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:55:59.948850   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:55:59.948908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:55:59.974553   54452 cri.go:89] found id: ""
	I1206 08:55:59.974567   54452 logs.go:282] 0 containers: []
	W1206 08:55:59.974575   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:55:59.974580   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:55:59.974638   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:00.057731   54452 cri.go:89] found id: ""
	I1206 08:56:00.057783   54452 logs.go:282] 0 containers: []
	W1206 08:56:00.057791   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:00.057798   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:00.058035   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:00.191639   54452 cri.go:89] found id: ""
	I1206 08:56:00.191655   54452 logs.go:282] 0 containers: []
	W1206 08:56:00.191663   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:00.191671   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:00.191685   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:00.488607   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:00.462504   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.463297   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477164   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477991   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.479845   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:00.462504   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.463297   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477164   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.477991   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:00.479845   13609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:00.488619   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:00.488632   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:00.602413   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:00.602434   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:00.637181   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:00.637200   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:00.701850   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:00.701868   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
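	Each retry cycle above opens with the same pgrep probe and, on failure, repeats roughly every three seconds (compare the 08:56:00 and 08:56:03 timestamps). A minimal sketch of that polling loop, assuming passwordless sudo on the node; the one-minute deadline is an illustrative stand-in for the harness's much longer real timeout:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the probe in the log: pgrep for a
	// kube-apiserver process whose command line mentions "minikube".
	// pgrep exits non-zero when nothing matches, which exec's Run
	// reports as an error, so err == nil means a match exists.
	func apiserverRunning() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		deadline := time.Now().Add(time.Minute) // assumption; the harness waits far longer
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			time.Sleep(3 * time.Second) // matches the ~3 s gap between cycles in the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}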
	I1206 08:56:03.215126   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:03.225397   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:03.225464   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:03.253115   54452 cri.go:89] found id: ""
	I1206 08:56:03.253128   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.253135   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:03.253143   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:03.253203   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:03.278704   54452 cri.go:89] found id: ""
	I1206 08:56:03.278717   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.278724   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:03.278730   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:03.278788   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:03.304400   54452 cri.go:89] found id: ""
	I1206 08:56:03.304414   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.304421   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:03.304427   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:03.304484   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:03.330915   54452 cri.go:89] found id: ""
	I1206 08:56:03.330927   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.330934   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:03.330939   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:03.331000   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:03.356123   54452 cri.go:89] found id: ""
	I1206 08:56:03.356136   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.356143   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:03.356149   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:03.356205   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:03.381497   54452 cri.go:89] found id: ""
	I1206 08:56:03.381511   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.381517   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:03.381523   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:03.381582   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:03.405821   54452 cri.go:89] found id: ""
	I1206 08:56:03.405834   54452 logs.go:282] 0 containers: []
	W1206 08:56:03.405841   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:03.405849   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:03.405859   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:03.462897   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:03.462918   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:03.474378   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:03.474393   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:03.559522   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:03.549699   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.550344   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.552761   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.554016   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:03.555406   13725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:03.559532   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:03.559545   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:03.626698   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:03.626716   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
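	The block of crictl queries repeated in every cycle checks one control-plane component at a time; with --quiet, an empty result is exactly what produces the `found id: ""` and `No container was found matching ...` pairs above. A sketch of the same sweep, assuming crictl on PATH and passwordless sudo:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same component names the harness queries, in the same order.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// --quiet prints only container IDs, one per line; no match
			// means empty output with a zero exit status.
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			if ids := strings.Fields(string(out)); len(ids) > 0 {
				fmt.Printf("%s: %v\n", name, ids)
			} else {
				fmt.Printf("no container found matching %q\n", name)
			}
		}
	}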
	I1206 08:56:06.154123   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:06.164837   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:06.164908   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:06.191102   54452 cri.go:89] found id: ""
	I1206 08:56:06.191115   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.191123   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:06.191128   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:06.191194   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:06.215815   54452 cri.go:89] found id: ""
	I1206 08:56:06.215829   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.215836   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:06.215841   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:06.215901   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:06.241431   54452 cri.go:89] found id: ""
	I1206 08:56:06.241445   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.241452   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:06.241457   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:06.241520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:06.266677   54452 cri.go:89] found id: ""
	I1206 08:56:06.266692   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.266699   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:06.266705   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:06.266768   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:06.290924   54452 cri.go:89] found id: ""
	I1206 08:56:06.290940   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.290948   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:06.290953   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:06.291015   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:06.315767   54452 cri.go:89] found id: ""
	I1206 08:56:06.315781   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.315788   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:06.315794   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:06.315852   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:06.341271   54452 cri.go:89] found id: ""
	I1206 08:56:06.341284   54452 logs.go:282] 0 containers: []
	W1206 08:56:06.341291   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:06.341298   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:06.341309   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:06.369777   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:06.369793   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:06.426976   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:06.426995   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:06.438111   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:06.438126   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:06.515349   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:06.504075   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.504993   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.506819   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.507499   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:06.510593   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:06.515366   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:06.515403   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
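	With no containers to inspect, each cycle falls back to the same four log sources, every one invoked through `/bin/bash -c`. A sketch of that sweep reusing the command strings from the log (the Go wrapper and the fixed ordering are illustrative; `$(...)` replaces the log's backquote substitution):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		steps := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"containerd", "sudo journalctl -u containerd -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"container status", "sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a"},
		}
		for _, s := range steps {
			// CombinedOutput captures stdout and stderr together, which
			// is what a diagnostic bundle wants.
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("=== %s (err=%v) ===\n%s\n", s.name, err, out)
		}
	}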
	I1206 08:56:09.084957   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:09.095918   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:09.095982   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:09.122788   54452 cri.go:89] found id: ""
	I1206 08:56:09.122802   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.122816   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:09.122822   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:09.122886   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:09.150280   54452 cri.go:89] found id: ""
	I1206 08:56:09.150296   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.150303   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:09.150308   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:09.150370   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:09.175968   54452 cri.go:89] found id: ""
	I1206 08:56:09.175982   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.175989   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:09.175995   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:09.176054   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:09.205200   54452 cri.go:89] found id: ""
	I1206 08:56:09.205214   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.205221   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:09.205226   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:09.205284   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:09.229722   54452 cri.go:89] found id: ""
	I1206 08:56:09.229741   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.229758   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:09.229764   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:09.229823   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:09.253449   54452 cri.go:89] found id: ""
	I1206 08:56:09.253462   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.253469   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:09.253475   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:09.253532   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:09.278075   54452 cri.go:89] found id: ""
	I1206 08:56:09.278096   54452 logs.go:282] 0 containers: []
	W1206 08:56:09.278103   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:09.278111   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:09.278127   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:09.334207   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:09.334224   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:09.345268   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:09.345284   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:09.411030   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:09.402872   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.403332   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.404994   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.405444   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:09.406900   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:09.411046   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:09.411057   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:09.477250   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:09.477268   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:12.012172   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:12.023603   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:12.023666   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:12.049522   54452 cri.go:89] found id: ""
	I1206 08:56:12.049536   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.049544   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:12.049549   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:12.049616   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:12.079322   54452 cri.go:89] found id: ""
	I1206 08:56:12.079336   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.079343   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:12.079348   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:12.079434   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:12.104615   54452 cri.go:89] found id: ""
	I1206 08:56:12.104629   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.104636   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:12.104642   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:12.104698   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:12.129522   54452 cri.go:89] found id: ""
	I1206 08:56:12.129536   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.129542   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:12.129548   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:12.129603   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:12.154617   54452 cri.go:89] found id: ""
	I1206 08:56:12.154631   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.154637   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:12.154642   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:12.154701   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:12.180772   54452 cri.go:89] found id: ""
	I1206 08:56:12.180786   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.180793   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:12.180798   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:12.180860   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:12.204559   54452 cri.go:89] found id: ""
	I1206 08:56:12.204573   54452 logs.go:282] 0 containers: []
	W1206 08:56:12.204585   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:12.204593   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:12.204605   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:12.267761   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:12.267780   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:12.295680   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:12.295696   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:12.355740   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:12.355759   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:12.367574   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:12.367589   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:12.438034   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:12.429169   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.429845   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.431592   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.432279   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:12.433870   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:14.938326   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:14.948550   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:14.948610   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:14.974812   54452 cri.go:89] found id: ""
	I1206 08:56:14.974825   54452 logs.go:282] 0 containers: []
	W1206 08:56:14.974832   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:14.974843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:14.974901   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:15.033969   54452 cri.go:89] found id: ""
	I1206 08:56:15.033985   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.034002   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:15.034009   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:15.034081   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:15.061932   54452 cri.go:89] found id: ""
	I1206 08:56:15.061946   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.061954   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:15.061959   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:15.062054   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:15.092717   54452 cri.go:89] found id: ""
	I1206 08:56:15.092731   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.092738   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:15.092744   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:15.092804   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:15.119219   54452 cri.go:89] found id: ""
	I1206 08:56:15.119234   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.119242   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:15.119247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:15.119309   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:15.149464   54452 cri.go:89] found id: ""
	I1206 08:56:15.149477   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.149485   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:15.149490   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:15.149550   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:15.175614   54452 cri.go:89] found id: ""
	I1206 08:56:15.175628   54452 logs.go:282] 0 containers: []
	W1206 08:56:15.175635   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:15.175643   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:15.175653   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:15.239770   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:15.239789   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:15.267874   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:15.267891   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:15.327229   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:15.327247   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:15.338540   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:15.338557   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:15.402152   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:15.393377   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.393759   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395003   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.395462   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:15.397184   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:17.903812   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:17.914165   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:17.914229   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:17.942343   54452 cri.go:89] found id: ""
	I1206 08:56:17.942357   54452 logs.go:282] 0 containers: []
	W1206 08:56:17.942363   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:17.942369   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:17.942427   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:17.972379   54452 cri.go:89] found id: ""
	I1206 08:56:17.972394   54452 logs.go:282] 0 containers: []
	W1206 08:56:17.972401   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:17.972406   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:17.972474   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:18.000726   54452 cri.go:89] found id: ""
	I1206 08:56:18.000740   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.000762   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:18.000768   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:18.000832   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:18.027348   54452 cri.go:89] found id: ""
	I1206 08:56:18.027406   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.027418   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:18.027431   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:18.027515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:18.055911   54452 cri.go:89] found id: ""
	I1206 08:56:18.055925   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.055933   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:18.055937   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:18.055994   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:18.085367   54452 cri.go:89] found id: ""
	I1206 08:56:18.085381   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.085392   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:18.085398   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:18.085466   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:18.110486   54452 cri.go:89] found id: ""
	I1206 08:56:18.110505   54452 logs.go:282] 0 containers: []
	W1206 08:56:18.110513   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:18.110520   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:18.110531   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:18.174849   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:18.166389   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.166788   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168371   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.168921   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:18.170369   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:18.174859   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:18.174870   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:18.237754   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:18.237774   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:18.268012   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:18.268033   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:18.324652   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:18.324671   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:20.837649   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:20.848772   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:20.848844   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:20.875162   54452 cri.go:89] found id: ""
	I1206 08:56:20.875177   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.875184   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:20.875190   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:20.875260   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:20.900599   54452 cri.go:89] found id: ""
	I1206 08:56:20.900613   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.900620   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:20.900625   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:20.900683   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:20.928195   54452 cri.go:89] found id: ""
	I1206 08:56:20.928209   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.928216   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:20.928221   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:20.928288   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:20.952510   54452 cri.go:89] found id: ""
	I1206 08:56:20.952524   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.952532   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:20.952537   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:20.952594   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:20.976651   54452 cri.go:89] found id: ""
	I1206 08:56:20.976665   54452 logs.go:282] 0 containers: []
	W1206 08:56:20.976672   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:20.976677   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:20.976747   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:21.003279   54452 cri.go:89] found id: ""
	I1206 08:56:21.003294   54452 logs.go:282] 0 containers: []
	W1206 08:56:21.003301   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:21.003306   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:21.003372   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:21.029382   54452 cri.go:89] found id: ""
	I1206 08:56:21.029396   54452 logs.go:282] 0 containers: []
	W1206 08:56:21.029403   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:21.029411   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:21.029421   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:21.091035   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:21.082849   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.083705   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085252   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.085569   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:21.087050   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:21.091049   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:21.091059   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:21.153084   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:21.153102   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:21.179992   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:21.180009   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:21.242302   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:21.242323   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:23.753350   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:23.764153   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:23.764212   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:23.794093   54452 cri.go:89] found id: ""
	I1206 08:56:23.794108   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.794115   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:23.794121   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:23.794192   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:23.818597   54452 cri.go:89] found id: ""
	I1206 08:56:23.818611   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.818618   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:23.818623   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:23.818681   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:23.845861   54452 cri.go:89] found id: ""
	I1206 08:56:23.845875   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.845882   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:23.845887   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:23.845951   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:23.871357   54452 cri.go:89] found id: ""
	I1206 08:56:23.871371   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.871423   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:23.871428   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:23.871486   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:23.895904   54452 cri.go:89] found id: ""
	I1206 08:56:23.895918   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.895926   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:23.895931   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:23.895998   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:23.921905   54452 cri.go:89] found id: ""
	I1206 08:56:23.921918   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.921925   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:23.921931   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:23.921988   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:23.946488   54452 cri.go:89] found id: ""
	I1206 08:56:23.946512   54452 logs.go:282] 0 containers: []
	W1206 08:56:23.946520   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:23.946529   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:23.946539   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:24.002888   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:24.002907   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:24.015146   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:24.015170   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:24.085686   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:24.074786   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.075755   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.078321   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.079336   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:24.080390   14459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:24.085697   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:24.085707   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:24.149216   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:24.149233   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:26.686769   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:26.697125   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:26.697183   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:26.728496   54452 cri.go:89] found id: ""
	I1206 08:56:26.728510   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.728527   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:26.728532   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:26.728597   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:26.755101   54452 cri.go:89] found id: ""
	I1206 08:56:26.755115   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.755130   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:26.755136   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:26.755195   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:26.785198   54452 cri.go:89] found id: ""
	I1206 08:56:26.785211   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.785229   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:26.785234   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:26.785298   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:26.816431   54452 cri.go:89] found id: ""
	I1206 08:56:26.816445   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.816452   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:26.816457   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:26.816515   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:26.841875   54452 cri.go:89] found id: ""
	I1206 08:56:26.841889   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.841897   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:26.841902   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:26.841964   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:26.868358   54452 cri.go:89] found id: ""
	I1206 08:56:26.868372   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.868379   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:26.868384   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:26.868456   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:26.895528   54452 cri.go:89] found id: ""
	I1206 08:56:26.895541   54452 logs.go:282] 0 containers: []
	W1206 08:56:26.895547   54452 logs.go:284] No container was found matching "kindnet"
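The block above is one full pass of the container sweep: for each control-plane component, minikube runs `crictl ps -a --quiet --name=<component>` over SSH, and an empty ID list produces the `No container was found matching ...` warning. Here all seven components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) come back empty, i.e. the control plane never started at all. A rough local equivalent of that sweep, assuming crictl is installed and sudo is available; a sketch, not minikube's cri.go:

// crisweep approximates the per-component container check from the log:
// it runs `crictl ps -a --quiet --name=<name>` for each component and
// reports which ones have no containers (running or exited).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// --quiet prints only container IDs, one per line; empty output
		// means nothing matched the name filter.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
}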
	I1206 08:56:26.895555   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:26.895564   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:26.961952   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:26.961970   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:27.006459   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:27.006475   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:27.063666   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:27.063685   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:27.074993   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:27.075011   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:27.138852   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:27.130623   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.131223   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.132971   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.133326   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.134833   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:27.130623   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.131223   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.132971   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.133326   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:27.134833   14580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:29.639504   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:29.649774   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:29.649848   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:29.679629   54452 cri.go:89] found id: ""
	I1206 08:56:29.679642   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.679650   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:29.679655   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:29.679716   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:29.704535   54452 cri.go:89] found id: ""
	I1206 08:56:29.704550   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.704557   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:29.704563   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:29.704635   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:29.737627   54452 cri.go:89] found id: ""
	I1206 08:56:29.737640   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.737647   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:29.737652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:29.737709   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:29.767083   54452 cri.go:89] found id: ""
	I1206 08:56:29.767097   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.767104   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:29.767109   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:29.767166   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:29.793665   54452 cri.go:89] found id: ""
	I1206 08:56:29.793685   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.793693   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:29.793698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:29.793761   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:29.822695   54452 cri.go:89] found id: ""
	I1206 08:56:29.822709   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.822717   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:29.822722   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:29.822781   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:29.848347   54452 cri.go:89] found id: ""
	I1206 08:56:29.848360   54452 logs.go:282] 0 containers: []
	W1206 08:56:29.848380   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:29.848389   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:29.848399   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:29.911329   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:29.911349   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:29.939981   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:29.939996   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:30.001274   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:30.001296   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:30.022683   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:30.022703   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:30.138182   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:30.128285   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.129603   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.130253   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132024   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132540   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:30.128285   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.129603   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.130253   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132024   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:30.132540   14685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
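Note which kubectl is failing here: the binary minikube installs inside the node at /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl, pointed at the node-local /var/lib/minikube/kubeconfig. The probe therefore runs from inside the guest, which rules out host networking and the host's kubeconfig as causes. A sketch reproducing the same invocation with the exit status surfaced explicitly, paths exactly as in the log (run inside the node; illustrative only):

// describenodes re-runs the probe from the log: the node-bundled kubectl
// against the node-local kubeconfig. Exit status 1 with "connection refused"
// on stderr reproduces what logs.go:130 records above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput() // stdout and stderr interleaved
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("describe nodes failed: %v\n", err)
	}
}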
	I1206 08:56:32.638423   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:32.648554   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:32.648613   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:32.672719   54452 cri.go:89] found id: ""
	I1206 08:56:32.672733   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.672741   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:32.672745   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:32.672808   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:32.697375   54452 cri.go:89] found id: ""
	I1206 08:56:32.697389   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.697396   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:32.697401   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:32.697456   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:32.730608   54452 cri.go:89] found id: ""
	I1206 08:56:32.730621   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.730628   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:32.730633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:32.730690   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:32.756886   54452 cri.go:89] found id: ""
	I1206 08:56:32.756900   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.756906   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:32.756911   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:32.756967   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:32.786416   54452 cri.go:89] found id: ""
	I1206 08:56:32.786429   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.786436   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:32.786441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:32.786499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:32.817852   54452 cri.go:89] found id: ""
	I1206 08:56:32.817866   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.817873   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:32.817878   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:32.817948   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:32.847789   54452 cri.go:89] found id: ""
	I1206 08:56:32.847803   54452 logs.go:282] 0 containers: []
	W1206 08:56:32.847810   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:32.847817   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:32.847826   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:32.913422   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:32.904590   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.905149   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907029   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907428   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.909140   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:32.904590   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.905149   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907029   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.907428   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:32.909140   14768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:32.913432   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:32.913443   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:32.979128   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:32.979147   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:33.009021   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:33.009038   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:33.066116   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:33.066134   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
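Each cycle is gated on `pgrep -xnf kube-apiserver.*minikube.*`: `-f` matches against the full command line, `-x` requires the pattern to match it exactly, and `-n` returns only the newest match. pgrep exits 1 when nothing matches, which is what sends the loop back into log gathering every time here. A hedged Go sketch of that probe, distinguishing "no process found" from "pgrep itself failed":

// pgrepcheck mirrors the process probe from the log. pgrep's exit status 1
// means "ran fine, matched nothing"; other non-zero statuses mean the tool
// itself failed, and the two cases should not be conflated.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func apiserverRunning() (bool, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Output()
	if err == nil {
		fmt.Printf("apiserver pid: %s\n", strings.TrimSpace(string(out)))
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return false, nil // pgrep ran fine but matched no process
	}
	return false, err // pgrep itself failed (bad pattern, not installed, ...)
}

func main() {
	ok, err := apiserverRunning()
	fmt.Println(ok, err)
}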
	I1206 08:56:35.577653   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:35.587677   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:35.587739   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:35.612385   54452 cri.go:89] found id: ""
	I1206 08:56:35.612398   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.612405   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:35.612416   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:35.612474   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:35.639348   54452 cri.go:89] found id: ""
	I1206 08:56:35.639362   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.639369   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:35.639395   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:35.639457   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:35.662406   54452 cri.go:89] found id: ""
	I1206 08:56:35.662420   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.662427   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:35.662432   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:35.662494   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:35.686450   54452 cri.go:89] found id: ""
	I1206 08:56:35.686464   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.686471   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:35.686476   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:35.686535   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:35.715902   54452 cri.go:89] found id: ""
	I1206 08:56:35.715915   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.715922   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:35.715927   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:35.715986   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:35.753483   54452 cri.go:89] found id: ""
	I1206 08:56:35.753496   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.753503   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:35.753509   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:35.753571   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:35.787475   54452 cri.go:89] found id: ""
	I1206 08:56:35.787488   54452 logs.go:282] 0 containers: []
	W1206 08:56:35.787495   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:35.787509   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:35.787520   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:35.799521   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:35.799536   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:35.865541   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:35.856956   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.857477   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859150   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859621   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.861412   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:35.856956   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.857477   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859150   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.859621   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:35.861412   14877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:35.865551   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:35.865562   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:35.928394   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:35.928412   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:35.960163   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:35.960178   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:38.518969   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:38.529441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:38.529503   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:38.556742   54452 cri.go:89] found id: ""
	I1206 08:56:38.556756   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.556764   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:38.556769   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:38.556828   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:38.585575   54452 cri.go:89] found id: ""
	I1206 08:56:38.585589   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.585596   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:38.585602   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:38.585675   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:38.610698   54452 cri.go:89] found id: ""
	I1206 08:56:38.610713   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.610721   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:38.610726   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:38.610799   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:38.635789   54452 cri.go:89] found id: ""
	I1206 08:56:38.635802   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.635809   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:38.635814   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:38.635875   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:38.664415   54452 cri.go:89] found id: ""
	I1206 08:56:38.664429   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.664436   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:38.664441   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:38.664499   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:38.692373   54452 cri.go:89] found id: ""
	I1206 08:56:38.692387   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.692394   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:38.692400   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:38.692463   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:38.717762   54452 cri.go:89] found id: ""
	I1206 08:56:38.717776   54452 logs.go:282] 0 containers: []
	W1206 08:56:38.717784   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:38.717791   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:38.717804   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:38.761801   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:38.761816   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:38.823195   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:38.823214   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:38.834338   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:38.834354   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:38.902350   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:38.894283   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895054   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895859   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897382   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897703   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:38.894283   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895054   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.895859   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897382   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:38.897703   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
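Between probes, the same diagnostic sources are collected on every cycle, only in rotating order: the kubelet and containerd units via journalctl, kernel warnings via dmesg, node state via the bundled kubectl, and container status via crictl with a docker fallback (`which crictl || echo crictl` keeps the pipeline alive even when crictl is absent). A condensed sketch of the four shell-based collectors, using the same commands the log shows (illustrative only; the describe-nodes probe is sketched separately above):

// gatherlogs runs the diagnostic commands seen in the log and prints how
// much output each produced. The container-status line keeps the same
// fallback the log uses: if crictl is missing, fall back to `docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources { // map order is random; order is irrelevant here
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes of logs\n", name, len(out))
	}
}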
	I1206 08:56:38.902361   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:38.902372   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:41.468409   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:41.478754   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:41.478820   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:41.506969   54452 cri.go:89] found id: ""
	I1206 08:56:41.506982   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.506989   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:41.506997   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:41.507057   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:41.531981   54452 cri.go:89] found id: ""
	I1206 08:56:41.531995   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.532002   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:41.532007   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:41.532067   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:41.556489   54452 cri.go:89] found id: ""
	I1206 08:56:41.556503   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.556511   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:41.556516   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:41.556578   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:41.582188   54452 cri.go:89] found id: ""
	I1206 08:56:41.582202   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.582209   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:41.582224   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:41.582297   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:41.608043   54452 cri.go:89] found id: ""
	I1206 08:56:41.608065   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.608073   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:41.608078   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:41.608149   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:41.636701   54452 cri.go:89] found id: ""
	I1206 08:56:41.636714   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.636722   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:41.636728   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:41.636786   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:41.661109   54452 cri.go:89] found id: ""
	I1206 08:56:41.661123   54452 logs.go:282] 0 containers: []
	W1206 08:56:41.661131   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:41.661138   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:41.661147   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:41.718276   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:41.718293   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:41.731689   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:41.731704   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:41.813161   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:41.804518   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.805060   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.806862   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.807552   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.809239   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:41.804518   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.805060   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.806862   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.807552   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:41.809239   15091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:41.813171   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:41.813183   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:41.879169   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:41.879189   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
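The klog timestamps make the retry cadence measurable: successive pgrep probes land roughly every three seconds (08:56:26.68, 29.63, 32.63, ...), so the window shown here covers about nine attempts in under thirty seconds. A small reader that extracts those gaps from a saved copy of this report on stdin; a hypothetical tool, assuming klog's year-less timestamp format:

// klogcadence extracts timestamps from klog-formatted lines (e.g.
// "I1206 08:56:44.813132   54452 ...") and prints the gap between
// successive pgrep probe lines, to measure the retry cadence.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

var stamp = regexp.MustCompile(`^[IWE](\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})`)

func main() {
	var prev time.Time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // logs have long lines
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.Contains(line, "pgrep -xnf kube-apiserver") {
			continue
		}
		m := stamp.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		// klog omits the year, so the parsed time lands in year 0;
		// only the differences between stamps matter here.
		t, err := time.Parse("0102 15:04:05.000000", m[1]+" "+m[2])
		if err != nil {
			continue
		}
		if !prev.IsZero() {
			fmt.Printf("%s  (+%v)\n", line[:30], t.Sub(prev))
		}
		prev = t
	}
}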
	I1206 08:56:44.409328   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:44.419475   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:44.419534   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:44.444626   54452 cri.go:89] found id: ""
	I1206 08:56:44.444640   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.444647   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:44.444652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:44.444709   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:44.469065   54452 cri.go:89] found id: ""
	I1206 08:56:44.469078   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.469085   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:44.469090   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:44.469154   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:44.492979   54452 cri.go:89] found id: ""
	I1206 08:56:44.492993   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.493000   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:44.493006   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:44.493065   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:44.517980   54452 cri.go:89] found id: ""
	I1206 08:56:44.517994   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.518012   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:44.518018   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:44.518084   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:44.550302   54452 cri.go:89] found id: ""
	I1206 08:56:44.550315   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.550322   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:44.550338   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:44.550411   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:44.574741   54452 cri.go:89] found id: ""
	I1206 08:56:44.574754   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.574773   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:44.574779   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:44.574844   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:44.599427   54452 cri.go:89] found id: ""
	I1206 08:56:44.599440   54452 logs.go:282] 0 containers: []
	W1206 08:56:44.599447   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:44.599454   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:44.599464   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:44.655195   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:44.655213   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:44.666596   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:44.666611   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:44.743689   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:44.734701   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.735711   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737288   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737597   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.739087   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:44.734701   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.735711   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737288   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.737597   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:44.739087   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:44.743706   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:44.743716   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:44.813114   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:44.813132   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:47.340486   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:47.350443   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:47.350502   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:47.381645   54452 cri.go:89] found id: ""
	I1206 08:56:47.381659   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.381666   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:47.381671   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:47.381732   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:47.408660   54452 cri.go:89] found id: ""
	I1206 08:56:47.408674   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.408681   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:47.408686   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:47.408751   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:47.434188   54452 cri.go:89] found id: ""
	I1206 08:56:47.434201   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.434208   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:47.434213   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:47.434272   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:47.463313   54452 cri.go:89] found id: ""
	I1206 08:56:47.463334   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.463342   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:47.463347   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:47.463437   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:47.491850   54452 cri.go:89] found id: ""
	I1206 08:56:47.491864   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.491871   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:47.491876   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:47.491942   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:47.520200   54452 cri.go:89] found id: ""
	I1206 08:56:47.520214   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.520221   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:47.520226   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:47.520289   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:47.546930   54452 cri.go:89] found id: ""
	I1206 08:56:47.546943   54452 logs.go:282] 0 containers: []
	W1206 08:56:47.546950   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:47.546958   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:47.546969   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:47.607002   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:47.607020   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:47.617961   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:47.617976   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:47.681928   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:47.673776   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.674574   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676165   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676631   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.678134   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:56:47.673776   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.674574   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676165   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.676631   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:47.678134   15297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:56:47.681938   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:47.681949   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:47.749465   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:47.749483   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:50.280242   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:50.291127   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:50.291189   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:50.316285   54452 cri.go:89] found id: ""
	I1206 08:56:50.316299   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.316307   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:50.316312   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:50.316378   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:50.342947   54452 cri.go:89] found id: ""
	I1206 08:56:50.342961   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.342968   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:50.342973   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:50.343034   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:50.368308   54452 cri.go:89] found id: ""
	I1206 08:56:50.368322   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.368329   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:50.368334   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:50.368392   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:50.392557   54452 cri.go:89] found id: ""
	I1206 08:56:50.392571   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.392578   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:50.392583   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:50.392643   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:50.417455   54452 cri.go:89] found id: ""
	I1206 08:56:50.417469   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.417477   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:50.417482   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:50.417547   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:50.442791   54452 cri.go:89] found id: ""
	I1206 08:56:50.442805   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.442813   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:50.442818   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:50.442887   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:50.473290   54452 cri.go:89] found id: ""
	I1206 08:56:50.473304   54452 logs.go:282] 0 containers: []
	W1206 08:56:50.473310   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:50.473318   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:50.473329   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:50.484225   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:50.484242   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:50.551034   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:50.542777   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.543204   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.544973   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.545586   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:50.547123   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:50.551048   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:50.551059   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:50.614007   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:50.614025   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:50.642494   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:50.642510   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:53.201231   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:53.211652   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:53.211712   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:53.237084   54452 cri.go:89] found id: ""
	I1206 08:56:53.237098   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.237106   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:53.237117   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:53.237179   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:53.265518   54452 cri.go:89] found id: ""
	I1206 08:56:53.265533   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.265541   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:53.265547   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:53.265619   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:53.291219   54452 cri.go:89] found id: ""
	I1206 08:56:53.291233   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.291242   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:53.291247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:53.291304   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:53.316119   54452 cri.go:89] found id: ""
	I1206 08:56:53.316135   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.316143   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:53.316148   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:53.316208   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:53.345553   54452 cri.go:89] found id: ""
	I1206 08:56:53.345566   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.345574   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:53.345579   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:53.345637   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:53.374116   54452 cri.go:89] found id: ""
	I1206 08:56:53.374130   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.374138   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:53.374144   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:53.374201   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:53.401450   54452 cri.go:89] found id: ""
	I1206 08:56:53.401463   54452 logs.go:282] 0 containers: []
	W1206 08:56:53.401470   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:53.401488   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:53.401498   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:53.464628   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:53.464645   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:53.492208   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:53.492225   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:53.548199   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:53.548216   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:53.559872   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:53.559887   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:53.624790   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:53.616289   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.617036   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.618638   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.619245   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:53.620839   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:56.126662   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:56.136918   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:56.136978   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:56.165346   54452 cri.go:89] found id: ""
	I1206 08:56:56.165359   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.165376   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:56.165382   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:56.165447   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:56.194525   54452 cri.go:89] found id: ""
	I1206 08:56:56.194538   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.194545   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:56.194562   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:56.194621   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:56.220295   54452 cri.go:89] found id: ""
	I1206 08:56:56.220309   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.220316   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:56.220321   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:56.220377   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:56.244567   54452 cri.go:89] found id: ""
	I1206 08:56:56.244580   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.244587   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:56.244592   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:56.244648   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:56.267992   54452 cri.go:89] found id: ""
	I1206 08:56:56.268005   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.268012   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:56.268018   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:56.268076   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:56.295817   54452 cri.go:89] found id: ""
	I1206 08:56:56.295830   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.295837   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:56.295843   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:56.295904   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:56.319421   54452 cri.go:89] found id: ""
	I1206 08:56:56.319435   54452 logs.go:282] 0 containers: []
	W1206 08:56:56.319442   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:56.319450   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:56.319460   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:56.350423   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:56.350439   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:56.407158   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:56.407176   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:56.417732   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:56.417747   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:56.488632   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:56.480052   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.480705   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.482573   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.483242   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:56.484311   15616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:56.488642   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:56.488652   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:56:59.061980   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:56:59.072278   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:56:59.072339   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:56:59.101215   54452 cri.go:89] found id: ""
	I1206 08:56:59.101228   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.101235   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:56:59.101241   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:56:59.101302   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:56:59.127327   54452 cri.go:89] found id: ""
	I1206 08:56:59.127342   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.127349   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:56:59.127355   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:56:59.127442   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:56:59.152367   54452 cri.go:89] found id: ""
	I1206 08:56:59.152381   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.152388   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:56:59.152393   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:56:59.152461   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:56:59.176595   54452 cri.go:89] found id: ""
	I1206 08:56:59.176609   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.176616   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:56:59.176622   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:56:59.176680   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:56:59.201640   54452 cri.go:89] found id: ""
	I1206 08:56:59.201654   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.201661   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:56:59.201667   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:56:59.201725   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:56:59.228000   54452 cri.go:89] found id: ""
	I1206 08:56:59.228015   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.228023   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:56:59.228028   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:56:59.228097   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:56:59.254668   54452 cri.go:89] found id: ""
	I1206 08:56:59.254681   54452 logs.go:282] 0 containers: []
	W1206 08:56:59.254688   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:56:59.254696   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:56:59.254707   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:56:59.284894   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:56:59.284910   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:56:59.342586   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:56:59.342604   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:56:59.354343   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:56:59.354368   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:56:59.422837   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:56:59.414293   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.414916   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416482   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.416892   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:56:59.418605   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:56:59.422847   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:56:59.422857   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:01.987724   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:02.004462   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:02.004525   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:02.037544   54452 cri.go:89] found id: ""
	I1206 08:57:02.037558   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.037565   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:02.037571   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:02.037629   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:02.064737   54452 cri.go:89] found id: ""
	I1206 08:57:02.064750   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.064759   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:02.064765   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:02.064822   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:02.090594   54452 cri.go:89] found id: ""
	I1206 08:57:02.090607   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.090615   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:02.090620   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:02.090677   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:02.118059   54452 cri.go:89] found id: ""
	I1206 08:57:02.118073   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.118080   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:02.118086   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:02.118142   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:02.147171   54452 cri.go:89] found id: ""
	I1206 08:57:02.147184   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.147191   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:02.147197   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:02.147258   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:02.178322   54452 cri.go:89] found id: ""
	I1206 08:57:02.178336   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.178343   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:02.178349   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:02.178409   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:02.206125   54452 cri.go:89] found id: ""
	I1206 08:57:02.206140   54452 logs.go:282] 0 containers: []
	W1206 08:57:02.206148   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:02.206156   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:02.206166   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:02.268742   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:02.268760   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:02.298364   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:02.298379   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:02.360782   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:02.360799   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:02.372144   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:02.372159   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:02.440932   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:02.432342   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.433106   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435042   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.435754   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:02.436799   15832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:04.941190   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:04.951545   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:04.951607   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:04.989383   54452 cri.go:89] found id: ""
	I1206 08:57:04.989398   54452 logs.go:282] 0 containers: []
	W1206 08:57:04.989406   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:04.989413   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:04.989480   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:05.024563   54452 cri.go:89] found id: ""
	I1206 08:57:05.024580   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.024588   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:05.024593   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:05.024654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:05.054247   54452 cri.go:89] found id: ""
	I1206 08:57:05.054260   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.054267   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:05.054272   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:05.054332   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:05.079563   54452 cri.go:89] found id: ""
	I1206 08:57:05.079582   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.079589   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:05.079594   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:05.079654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:05.104268   54452 cri.go:89] found id: ""
	I1206 08:57:05.104281   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.104288   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:05.104294   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:05.104354   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:05.133366   54452 cri.go:89] found id: ""
	I1206 08:57:05.133389   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.133399   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:05.133404   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:05.133473   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:05.157604   54452 cri.go:89] found id: ""
	I1206 08:57:05.157618   54452 logs.go:282] 0 containers: []
	W1206 08:57:05.157625   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:05.157633   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:05.157644   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:05.169011   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:05.169026   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:05.232729   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:05.223674   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.224539   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226385   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.226913   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:05.228611   15924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:05.232739   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:05.232750   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:05.295112   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:05.295130   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:05.323164   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:05.323180   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:07.880424   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:07.890491   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:07.890546   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:07.919674   54452 cri.go:89] found id: ""
	I1206 08:57:07.919688   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.919695   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:07.919702   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:07.919765   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:07.944058   54452 cri.go:89] found id: ""
	I1206 08:57:07.944072   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.944080   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:07.944085   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:07.944143   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:07.975197   54452 cri.go:89] found id: ""
	I1206 08:57:07.975211   54452 logs.go:282] 0 containers: []
	W1206 08:57:07.975219   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:07.975223   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:07.975286   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:08.003528   54452 cri.go:89] found id: ""
	I1206 08:57:08.003551   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.003559   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:08.003565   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:08.003632   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:08.042231   54452 cri.go:89] found id: ""
	I1206 08:57:08.042244   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.042251   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:08.042264   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:08.042340   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:08.070769   54452 cri.go:89] found id: ""
	I1206 08:57:08.070783   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.070800   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:08.070806   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:08.070863   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:08.095705   54452 cri.go:89] found id: ""
	I1206 08:57:08.095722   54452 logs.go:282] 0 containers: []
	W1206 08:57:08.095729   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:08.095736   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:08.095745   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:08.152794   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:08.152812   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:08.163981   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:08.164009   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:08.231637   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:08.223305   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.223828   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225446   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.225934   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:08.227447   16032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:08.231648   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:08.231659   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:08.294693   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:08.294710   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:10.824685   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:10.834735   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:10.834797   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:10.861282   54452 cri.go:89] found id: ""
	I1206 08:57:10.861297   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.861304   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:10.861309   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:10.861380   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:10.889560   54452 cri.go:89] found id: ""
	I1206 08:57:10.889573   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.889580   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:10.889585   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:10.889646   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:10.918582   54452 cri.go:89] found id: ""
	I1206 08:57:10.918597   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.918605   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:10.918611   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:10.918677   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:10.945055   54452 cri.go:89] found id: ""
	I1206 08:57:10.945068   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.945075   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:10.945081   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:10.945142   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:10.971779   54452 cri.go:89] found id: ""
	I1206 08:57:10.971807   54452 logs.go:282] 0 containers: []
	W1206 08:57:10.971814   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:10.971820   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:10.971883   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:11.007014   54452 cri.go:89] found id: ""
	I1206 08:57:11.007028   54452 logs.go:282] 0 containers: []
	W1206 08:57:11.007035   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:11.007041   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:11.007103   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:11.033387   54452 cri.go:89] found id: ""
	I1206 08:57:11.033415   54452 logs.go:282] 0 containers: []
	W1206 08:57:11.033422   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:11.033431   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:11.033441   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:11.103950   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:11.094735   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.095599   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097342   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.097718   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:11.099415   16133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 08:57:11.103962   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:11.103972   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:11.168820   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:11.168839   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:11.199653   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:11.199669   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:11.258665   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:11.258682   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:13.770048   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:13.780437   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:13.780537   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:13.804490   54452 cri.go:89] found id: ""
	I1206 08:57:13.804504   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.804511   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:13.804517   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:13.804576   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:13.828142   54452 cri.go:89] found id: ""
	I1206 08:57:13.828156   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.828163   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:13.828173   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:13.828234   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:13.852993   54452 cri.go:89] found id: ""
	I1206 08:57:13.853006   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.853013   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:13.853017   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:13.853073   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:13.876970   54452 cri.go:89] found id: ""
	I1206 08:57:13.876983   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.876990   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:13.876996   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:13.877057   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:13.906173   54452 cri.go:89] found id: ""
	I1206 08:57:13.906189   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.906196   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:13.906201   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:13.906260   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:13.932656   54452 cri.go:89] found id: ""
	I1206 08:57:13.932670   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.932677   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:13.932682   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:13.932744   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:13.958494   54452 cri.go:89] found id: ""
	I1206 08:57:13.958507   54452 logs.go:282] 0 containers: []
	W1206 08:57:13.958514   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:13.958522   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:13.958533   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:13.969906   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:13.969925   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:14.055494   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:14.045404   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.046095   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.048372   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.049321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.050244   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:14.045404   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.046095   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.048372   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.049321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:14.050244   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:14.055511   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:14.055523   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:14.119159   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:14.119179   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:14.151907   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:14.151925   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
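The pass above is one full iteration of the wait loop: pgrep looks for a kube-apiserver process, each control-plane component is then looked up by name in the CRI runtime, and every lookup comes back empty before the log gathering repeats. A minimal sketch of that probe sequence, with a hypothetical runCmd helper standing in for minikube's ssh_runner (which executes the same commands over SSH inside the node rather than locally):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runCmd is a hypothetical stand-in for minikube's ssh_runner: it runs
    // the command locally and returns trimmed stdout.
    func runCmd(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	// The same component list the log cycles through.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, c := range components {
    		// Mirrors: sudo crictl ps -a --quiet --name=<component>
    		ids, err := runCmd("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c)
    		if err != nil || ids == "" {
    			fmt.Printf("no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %s\n", c, ids)
    	}
    }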
	I1206 08:57:16.720554   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:16.731520   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:16.731584   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:16.757438   54452 cri.go:89] found id: ""
	I1206 08:57:16.757452   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.757458   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:16.757463   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:16.757520   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:16.782537   54452 cri.go:89] found id: ""
	I1206 08:57:16.782552   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.782559   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:16.782564   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:16.782619   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:16.811967   54452 cri.go:89] found id: ""
	I1206 08:57:16.811981   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.811988   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:16.811993   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:16.812051   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:16.840450   54452 cri.go:89] found id: ""
	I1206 08:57:16.840464   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.840471   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:16.840477   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:16.840553   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:16.865953   54452 cri.go:89] found id: ""
	I1206 08:57:16.865968   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.865975   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:16.865981   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:16.866043   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:16.890520   54452 cri.go:89] found id: ""
	I1206 08:57:16.890540   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.890547   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:16.890552   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:16.890611   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:16.915368   54452 cri.go:89] found id: ""
	I1206 08:57:16.915411   54452 logs.go:282] 0 containers: []
	W1206 08:57:16.915418   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:16.915425   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:16.915435   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:16.975773   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:16.975792   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:16.990535   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:16.990557   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:17.060425   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:17.052130   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.052751   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054271   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054603   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.056244   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:17.052130   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.052751   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054271   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.054603   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:17.056244   16350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:17.060435   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:17.060446   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:17.124040   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:17.124060   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
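When no containers match, the loop falls back to collecting whatever evidence the node has: the last 400 journal lines for kubelet and containerd, warning-and-above dmesg output, and an all-states container listing. The container-status line uses a shell fallback, `which crictl || echo crictl`, so a missing crictl still yields the literal name, the sudo invocation fails loudly, and `docker ps -a` is tried instead. A self-contained sketch of that gather set (run locally here; minikube runs it inside the node over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The four "Gathering logs for ..." commands the loop runs each pass.
    	cmds := map[string]string{
    		"kubelet":          `sudo journalctl -u kubelet -n 400`,
    		"containerd":       `sudo journalctl -u containerd -n 400`,
    		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
    	}
    }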
	I1206 08:57:19.655902   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:19.666330   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:19.666398   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:19.695219   54452 cri.go:89] found id: ""
	I1206 08:57:19.695232   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.695239   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:19.695245   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:19.695309   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:19.720027   54452 cri.go:89] found id: ""
	I1206 08:57:19.720041   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.720048   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:19.720053   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:19.720112   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:19.745773   54452 cri.go:89] found id: ""
	I1206 08:57:19.745787   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.745794   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:19.745799   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:19.745858   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:19.770885   54452 cri.go:89] found id: ""
	I1206 08:57:19.770898   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.770905   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:19.770910   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:19.770970   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:19.797192   54452 cri.go:89] found id: ""
	I1206 08:57:19.797205   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.797212   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:19.797218   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:19.797278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:19.825222   54452 cri.go:89] found id: ""
	I1206 08:57:19.825236   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.825243   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:19.825248   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:19.825314   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:19.855303   54452 cri.go:89] found id: ""
	I1206 08:57:19.855317   54452 logs.go:282] 0 containers: []
	W1206 08:57:19.855324   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:19.855332   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:19.855342   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:19.912412   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:19.912430   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:19.924673   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:19.924689   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:20.010098   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:19.995577   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998398   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998837   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.003925   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.004952   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:19.995577   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998398   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:19.998837   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.003925   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:20.004952   16449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:20.010109   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:20.010121   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:20.081433   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:20.081453   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
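The describe-nodes step wraps the version-matched kubectl shipped into the node under /var/lib/minikube/binaries and points it at the node-local kubeconfig; with the apiserver down it exits 1 every time. The command string appearing twice in each "failed describe nodes" warning appears to be one invocation whose command is printed twice (once by the warning itself, once inside the wrapped runner error), not two runs. A sketch of the invocation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The in-VM kubectl matching the requested Kubernetes version,
    	// pointed at the kubeconfig minikube writes inside the node.
    	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
    	cmd := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo %s describe nodes --kubeconfig=%s",
    			kubectl, "/var/lib/minikube/kubeconfig"))
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		// With the apiserver down this exits 1, exactly as in the log.
    		fmt.Printf("describe nodes failed: %v\n%s", err, out)
    	}
    }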
	I1206 08:57:22.615286   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:22.625653   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:22.625713   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:22.650708   54452 cri.go:89] found id: ""
	I1206 08:57:22.650721   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.650728   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:22.650734   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:22.650793   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:22.675795   54452 cri.go:89] found id: ""
	I1206 08:57:22.675809   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.675816   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:22.675821   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:22.675876   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:22.700140   54452 cri.go:89] found id: ""
	I1206 08:57:22.700153   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.700160   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:22.700165   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:22.700224   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:22.726855   54452 cri.go:89] found id: ""
	I1206 08:57:22.726869   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.726876   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:22.726882   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:22.726938   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:22.751934   54452 cri.go:89] found id: ""
	I1206 08:57:22.751947   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.751954   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:22.751960   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:22.752017   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:22.780047   54452 cri.go:89] found id: ""
	I1206 08:57:22.780061   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.780068   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:22.780074   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:22.780132   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:22.804185   54452 cri.go:89] found id: ""
	I1206 08:57:22.804199   54452 logs.go:282] 0 containers: []
	W1206 08:57:22.804206   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:22.804214   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:22.804230   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:22.814840   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:22.814855   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:22.881877   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:22.873545   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.874258   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.875884   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.876440   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.878086   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:22.873545   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.874258   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.875884   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.876440   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:22.878086   16553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:22.881887   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:22.881897   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:22.949826   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:22.949846   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:22.990802   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:22.990820   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
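The pgrep timestamps (08:57:13.77, 16.72, 19.65, 22.61, 25.55, ...) show a new pass starting roughly every three seconds, so this reads as a fixed-interval poll rather than exponential backoff. A compact sketch of such a loop; the six-minute budget is an assumption for illustration, since the actual timeout is not visible in this excerpt:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverUp sketches the per-pass check: pgrep exits 0 only when a
    // kube-apiserver process whose command line mentions the profile exists.
    func apiserverUp() bool {
    	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	return err == nil
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		if apiserverUp() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the ~3 s cadence in the log
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }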
	I1206 08:57:25.557401   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:25.567869   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:25.567931   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:25.593044   54452 cri.go:89] found id: ""
	I1206 08:57:25.593058   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.593065   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:25.593070   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:25.593131   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:25.621119   54452 cri.go:89] found id: ""
	I1206 08:57:25.621134   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.621141   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:25.621146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:25.621206   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:25.649977   54452 cri.go:89] found id: ""
	I1206 08:57:25.649991   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.649998   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:25.650003   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:25.650066   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:25.674573   54452 cri.go:89] found id: ""
	I1206 08:57:25.674586   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.674593   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:25.674598   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:25.674654   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:25.700412   54452 cri.go:89] found id: ""
	I1206 08:57:25.700425   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.700432   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:25.700438   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:25.700501   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:25.726656   54452 cri.go:89] found id: ""
	I1206 08:57:25.726670   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.726686   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:25.726691   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:25.726760   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:25.751625   54452 cri.go:89] found id: ""
	I1206 08:57:25.751639   54452 logs.go:282] 0 containers: []
	W1206 08:57:25.751646   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:25.751653   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:25.751664   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:25.812914   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:25.804895   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.805687   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807191   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807672   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.809146   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:25.804895   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.805687   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807191   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.807672   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:25.809146   16658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:25.812924   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:25.812936   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:25.875880   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:25.875898   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:25.905301   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:25.905316   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:25.964301   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:25.964320   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
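The `found id: ""` lines are the raw result of `crictl ps -a --quiet`, which prints one container ID per line and nothing at all when no container matches; splitting that empty string is what produces the `0 containers: []` that follows. A small sketch of that parsing (the same idea, not minikube's cri.go verbatim):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseIDs turns crictl's --quiet output into a list of container IDs;
    // empty output yields an empty list, i.e. the log's "0 containers: []".
    func parseIDs(raw string) []string {
    	var ids []string
    	for _, line := range strings.Split(raw, "\n") {
    		if id := strings.TrimSpace(line); id != "" {
    			ids = append(ids, id)
    		}
    	}
    	return ids
    }

    func main() {
    	fmt.Println(len(parseIDs("")))           // 0 — the failing case in the log
    	fmt.Println(len(parseIDs("abc\ndef\n"))) // 2 — what a healthy node returns
    }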
	I1206 08:57:28.477584   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:28.487626   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:28.487685   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:28.516024   54452 cri.go:89] found id: ""
	I1206 08:57:28.516038   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.516045   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:28.516050   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:28.516109   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:28.542151   54452 cri.go:89] found id: ""
	I1206 08:57:28.542165   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.542172   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:28.542177   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:28.542234   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:28.569963   54452 cri.go:89] found id: ""
	I1206 08:57:28.569977   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.569984   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:28.569989   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:28.570047   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:28.594336   54452 cri.go:89] found id: ""
	I1206 08:57:28.594350   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.594357   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:28.594362   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:28.594421   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:28.620834   54452 cri.go:89] found id: ""
	I1206 08:57:28.620846   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.620854   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:28.620859   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:28.620916   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:28.645672   54452 cri.go:89] found id: ""
	I1206 08:57:28.645686   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.645693   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:28.645698   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:28.645762   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:28.670982   54452 cri.go:89] found id: ""
	I1206 08:57:28.670997   54452 logs.go:282] 0 containers: []
	W1206 08:57:28.671004   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:28.671011   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:28.671022   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:28.729216   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:28.729234   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:28.741378   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:28.741394   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:28.808285   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:28.799664   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.800557   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802319   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802654   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.804202   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:28.799664   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.800557   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802319   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.802654   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:28.804202   16769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:28.808296   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:28.808308   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:28.872187   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:28.872205   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
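The root named by the cri.go lines, /run/containerd/runc/k8s.io, is the runc state directory minikube uses for containerd's k8s.io namespace; container state for running containers is tracked under it, so an empty directory is consistent with every crictl lookup returning nothing. A sketch that inspects it directly:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// The path from the cri.go lines. Entries here, if any, correspond
    	// to container state tracked by runc for the k8s.io namespace.
    	root := "/run/containerd/runc/k8s.io"
    	entries, err := os.ReadDir(root)
    	if err != nil {
    		fmt.Printf("cannot read %s: %v\n", root, err)
    		return
    	}
    	fmt.Printf("%d container state entries under %s\n", len(entries), root)
    }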
	I1206 08:57:31.410802   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:31.421507   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:31.421567   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:31.449192   54452 cri.go:89] found id: ""
	I1206 08:57:31.449206   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.449213   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:31.449219   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:31.449278   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:31.479043   54452 cri.go:89] found id: ""
	I1206 08:57:31.479057   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.479070   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:31.479075   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:31.479138   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:31.504010   54452 cri.go:89] found id: ""
	I1206 08:57:31.504024   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.504031   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:31.504036   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:31.504094   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:31.529789   54452 cri.go:89] found id: ""
	I1206 08:57:31.529807   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.529818   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:31.529824   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:31.529890   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:31.555332   54452 cri.go:89] found id: ""
	I1206 08:57:31.555346   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.555354   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:31.555359   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:31.555449   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:31.585896   54452 cri.go:89] found id: ""
	I1206 08:57:31.585909   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.585916   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:31.585922   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:31.585980   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:31.610938   54452 cri.go:89] found id: ""
	I1206 08:57:31.610950   54452 logs.go:282] 0 containers: []
	W1206 08:57:31.610958   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:31.610965   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:31.610975   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:31.667535   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:31.667553   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:31.680211   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:31.680234   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:31.750810   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:31.742704   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.743477   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745233   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745766   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.746756   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:31.742704   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.743477   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745233   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.745766   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:31.746756   16874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:31.750821   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:31.750833   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:31.813960   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:31.813983   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:34.341858   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:34.352097   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:34.352170   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:34.379126   54452 cri.go:89] found id: ""
	I1206 08:57:34.379140   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.379148   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:34.379153   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:34.379211   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:34.404136   54452 cri.go:89] found id: ""
	I1206 08:57:34.404150   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.404158   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:34.404163   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:34.404222   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:34.429318   54452 cri.go:89] found id: ""
	I1206 08:57:34.429333   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.429340   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:34.429346   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:34.429410   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:34.454607   54452 cri.go:89] found id: ""
	I1206 08:57:34.454621   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.454628   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:34.454633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:34.454689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:34.481702   54452 cri.go:89] found id: ""
	I1206 08:57:34.481715   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.481722   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:34.481727   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:34.481786   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:34.506222   54452 cri.go:89] found id: ""
	I1206 08:57:34.506236   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.506242   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:34.506247   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:34.506307   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:34.531791   54452 cri.go:89] found id: ""
	I1206 08:57:34.531804   54452 logs.go:282] 0 containers: []
	W1206 08:57:34.531811   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:34.531818   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:34.531829   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:34.542352   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:34.542368   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:34.605646   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:34.597261   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.597943   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.599605   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.600148   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.601815   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:34.597261   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.597943   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.599605   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.600148   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:34.601815   16976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:34.605655   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:34.605666   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:34.668800   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:34.668818   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:34.703806   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:34.703822   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:37.265019   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:37.275013   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:37.275073   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:37.300683   54452 cri.go:89] found id: ""
	I1206 08:57:37.300696   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.300704   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:37.300710   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:37.300768   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:37.326083   54452 cri.go:89] found id: ""
	I1206 08:57:37.326096   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.326103   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:37.326109   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:37.326169   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:37.354381   54452 cri.go:89] found id: ""
	I1206 08:57:37.354395   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.354402   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:37.354407   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:37.354467   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:37.379048   54452 cri.go:89] found id: ""
	I1206 08:57:37.379062   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.379069   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:37.379074   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:37.379132   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:37.407083   54452 cri.go:89] found id: ""
	I1206 08:57:37.407097   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.407104   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:37.407120   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:37.407179   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:37.430756   54452 cri.go:89] found id: ""
	I1206 08:57:37.430769   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.430777   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:37.430782   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:37.430839   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:37.459469   54452 cri.go:89] found id: ""
	I1206 08:57:37.459483   54452 logs.go:282] 0 containers: []
	W1206 08:57:37.459490   54452 logs.go:284] No container was found matching "kindnet"
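	The block above is minikube enumerating every expected control-plane container by name and finding none. A compact equivalent of that loop, using the exact crictl invocation from the log (component list copied from the entries above):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")   # same command the log runs per component
	      [ -z "$ids" ] && echo "no container matching \"$c\""
	    done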
	I1206 08:57:37.459498   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:37.459510   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:37.470844   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:37.470860   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:37.538783   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:37.530038   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.530744   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.532506   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.533299   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.534867   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:37.530038   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.530744   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.532506   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.533299   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:37.534867   17079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:37.538793   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:37.538804   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:37.604935   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:37.604954   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:37.637474   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:37.637491   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 08:57:40.195736   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:40.205728   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 08:57:40.205790   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 08:57:40.242821   54452 cri.go:89] found id: ""
	I1206 08:57:40.242834   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.242841   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 08:57:40.242847   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 08:57:40.242902   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 08:57:40.284606   54452 cri.go:89] found id: ""
	I1206 08:57:40.284620   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.284628   54452 logs.go:284] No container was found matching "etcd"
	I1206 08:57:40.284633   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 08:57:40.284689   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 08:57:40.317256   54452 cri.go:89] found id: ""
	I1206 08:57:40.317270   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.317277   54452 logs.go:284] No container was found matching "coredns"
	I1206 08:57:40.317282   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 08:57:40.317339   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 08:57:40.341890   54452 cri.go:89] found id: ""
	I1206 08:57:40.341904   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.341911   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 08:57:40.341916   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 08:57:40.341971   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 08:57:40.365889   54452 cri.go:89] found id: ""
	I1206 08:57:40.365902   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.365909   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 08:57:40.365915   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 08:57:40.365970   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 08:57:40.390366   54452 cri.go:89] found id: ""
	I1206 08:57:40.390379   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.390386   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 08:57:40.390393   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 08:57:40.390451   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 08:57:40.414154   54452 cri.go:89] found id: ""
	I1206 08:57:40.414168   54452 logs.go:282] 0 containers: []
	W1206 08:57:40.414174   54452 logs.go:284] No container was found matching "kindnet"
	I1206 08:57:40.414182   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 08:57:40.414192   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 08:57:40.425672   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 08:57:40.425688   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 08:57:40.491793   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 08:57:40.479914   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.480484   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485346   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485909   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.487745   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 08:57:40.479914   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.480484   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485346   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.485909   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 08:57:40.487745   17178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 08:57:40.491804   54452 logs.go:123] Gathering logs for containerd ...
	I1206 08:57:40.491815   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 08:57:40.554734   54452 logs.go:123] Gathering logs for container status ...
	I1206 08:57:40.554754   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 08:57:40.585496   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 08:57:40.585511   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
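	Each "Gathering logs for ..." pass shells out to the same fixed commands; they are copied verbatim below and can be run by hand inside the node (e.g. via `minikube ssh`) when reproducing this failure:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a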
	I1206 08:57:43.142927   54452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:57:43.152875   54452 kubeadm.go:602] duration metric: took 4m4.203206664s to restartPrimaryControlPlane
	W1206 08:57:43.152943   54452 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 08:57:43.153014   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 08:57:43.558005   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:57:43.571431   54452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:57:43.579298   54452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 08:57:43.579354   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:57:43.587284   54452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 08:57:43.587293   54452 kubeadm.go:158] found existing configuration files:
	
	I1206 08:57:43.587347   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 08:57:43.595209   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 08:57:43.595263   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 08:57:43.602677   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 08:57:43.610821   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 08:57:43.610884   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:57:43.618219   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 08:57:43.625867   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 08:57:43.625922   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:57:43.633373   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 08:57:43.640818   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 08:57:43.640880   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
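	The ls/grep/rm sequence above is minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not mention the expected endpoint (or, as here, does not exist at all) is removed so the `kubeadm init` that follows can regenerate it. A sketch of the pattern, with the paths and endpoint taken from the log:

	    endpoint="https://control-plane.minikube.internal:8441"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already points at the expected endpoint
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done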
	I1206 08:57:43.648275   54452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 08:57:43.690498   54452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 08:57:43.690790   54452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 08:57:43.763599   54452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 08:57:43.763663   54452 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 08:57:43.763697   54452 kubeadm.go:319] OS: Linux
	I1206 08:57:43.763740   54452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 08:57:43.763787   54452 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 08:57:43.763833   54452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 08:57:43.763880   54452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 08:57:43.763928   54452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 08:57:43.763975   54452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 08:57:43.764019   54452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 08:57:43.764066   54452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 08:57:43.764112   54452 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 08:57:43.838707   54452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 08:57:43.838810   54452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 08:57:43.838899   54452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 08:57:43.843797   54452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 08:57:43.849166   54452 out.go:252]   - Generating certificates and keys ...
	I1206 08:57:43.849248   54452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 08:57:43.849312   54452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 08:57:43.849386   54452 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 08:57:43.849451   54452 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 08:57:43.849520   54452 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 08:57:43.849572   54452 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 08:57:43.849633   54452 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 08:57:43.849693   54452 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 08:57:43.849766   54452 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 08:57:43.849838   54452 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 08:57:43.849874   54452 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 08:57:43.849928   54452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 08:57:44.005203   54452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 08:57:44.248156   54452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 08:57:44.506601   54452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 08:57:44.747606   54452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 08:57:44.875144   54452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 08:57:44.875922   54452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 08:57:44.878561   54452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 08:57:44.881876   54452 out.go:252]   - Booting up control plane ...
	I1206 08:57:44.881976   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 08:57:44.882052   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 08:57:44.882117   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 08:57:44.902770   54452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 08:57:44.902884   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 08:57:44.910887   54452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 08:57:44.915557   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 08:57:44.915618   54452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 08:57:45.072565   54452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 08:57:45.072679   54452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:01:45.073201   54452 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00139193s
	I1206 09:01:45.073230   54452 kubeadm.go:319] 
	I1206 09:01:45.073292   54452 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:01:45.073325   54452 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:01:45.073460   54452 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:01:45.073475   54452 kubeadm.go:319] 
	I1206 09:01:45.073605   54452 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:01:45.073641   54452 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:01:45.073671   54452 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:01:45.073674   54452 kubeadm.go:319] 
	I1206 09:01:45.079541   54452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:01:45.080019   54452 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:01:45.080137   54452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:01:45.080372   54452 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 09:01:45.080377   54452 kubeadm.go:319] 
	W1206 09:01:45.080611   54452 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00139193s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
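	The init attempt above dies on the kubelet-check timeout, and the stderr carries the one actionable hint: this 5.15 AWS kernel is still on cgroups v1, which the warning says kubelet v1.35+ rejects unless cgroup v1 support is explicitly re-enabled. A minimal sketch of that opt-in, assuming the YAML field spelling matches the 'FailCgroupV1' option the warning names (verify against the KubeletConfiguration reference for this kubelet version); per the same warning, the SystemVerification preflight must also be skipped, which the --ignore-preflight-errors list above already does:

	    # sketch only: field name assumed from the [WARNING SystemVerification] text above
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false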
	
	I1206 09:01:45.080716   54452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:01:45.081059   54452 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:01:45.527784   54452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:01:45.541714   54452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:01:45.541768   54452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:01:45.549724   54452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:01:45.549735   54452 kubeadm.go:158] found existing configuration files:
	
	I1206 09:01:45.549787   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 09:01:45.557657   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:01:45.557710   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:01:45.565116   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 09:01:45.572963   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:01:45.573017   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:01:45.580604   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 09:01:45.588212   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:01:45.588267   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:01:45.595779   54452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 09:01:45.604082   54452 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:01:45.604137   54452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:01:45.612084   54452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:01:45.650374   54452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:01:45.650428   54452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:01:45.720642   54452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:01:45.720706   54452 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:01:45.720740   54452 kubeadm.go:319] OS: Linux
	I1206 09:01:45.720783   54452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:01:45.720831   54452 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:01:45.720876   54452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:01:45.720923   54452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:01:45.720970   54452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:01:45.721017   54452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:01:45.721061   54452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:01:45.721107   54452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:01:45.721153   54452 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:01:45.786361   54452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:01:45.786476   54452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:01:45.786571   54452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:01:45.791901   54452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:01:45.795433   54452 out.go:252]   - Generating certificates and keys ...
	I1206 09:01:45.795514   54452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:01:45.795578   54452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:01:45.795654   54452 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 09:01:45.795714   54452 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 09:01:45.795783   54452 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 09:01:45.795835   54452 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 09:01:45.795898   54452 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 09:01:45.795958   54452 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 09:01:45.796032   54452 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 09:01:45.796104   54452 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 09:01:45.796185   54452 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 09:01:45.796240   54452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:01:45.935718   54452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:01:46.055895   54452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:01:46.294260   54452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:01:46.619812   54452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:01:46.778456   54452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:01:46.779211   54452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:01:46.782067   54452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:01:46.785434   54452 out.go:252]   - Booting up control plane ...
	I1206 09:01:46.785536   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:01:46.785617   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:01:46.785688   54452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:01:46.805726   54452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:01:46.805831   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:01:46.814430   54452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:01:46.816546   54452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:01:46.816591   54452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:01:46.952811   54452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:01:46.952924   54452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:05:46.951725   54452 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00022284s
	I1206 09:05:46.951748   54452 kubeadm.go:319] 
	I1206 09:05:46.951804   54452 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:05:46.951836   54452 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:05:46.951939   54452 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:05:46.951944   54452 kubeadm.go:319] 
	I1206 09:05:46.952047   54452 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:05:46.952078   54452 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:05:46.952108   54452 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:05:46.952111   54452 kubeadm.go:319] 
	I1206 09:05:46.956655   54452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:05:46.957065   54452 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:05:46.957172   54452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:05:46.957405   54452 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 09:05:46.957409   54452 kubeadm.go:319] 
	I1206 09:05:46.957479   54452 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:05:46.957537   54452 kubeadm.go:403] duration metric: took 12m8.043807841s to StartCluster
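	The retry fails identically four minutes later, and after 12m8s minikube gives up on StartCluster. In both attempts the kubelet never answered its local health endpoint, so the probes worth running first are exactly the ones the error text itself names:

	    systemctl status kubelet                   # is the unit active at all?
	    journalctl -xeu kubelet                    # why it exited or failed to start
	    curl -sSL http://127.0.0.1:10248/healthz   # kubeadm's own kubelet health probe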
	I1206 09:05:46.957567   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:05:46.957632   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:05:47.005263   54452 cri.go:89] found id: ""
	I1206 09:05:47.005276   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.005284   54452 logs.go:284] No container was found matching "kube-apiserver"
	I1206 09:05:47.005289   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:05:47.005348   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:05:47.039824   54452 cri.go:89] found id: ""
	I1206 09:05:47.039837   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.039844   54452 logs.go:284] No container was found matching "etcd"
	I1206 09:05:47.039849   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:05:47.039907   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:05:47.069199   54452 cri.go:89] found id: ""
	I1206 09:05:47.069215   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.069222   54452 logs.go:284] No container was found matching "coredns"
	I1206 09:05:47.069228   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:05:47.069290   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:05:47.094120   54452 cri.go:89] found id: ""
	I1206 09:05:47.094134   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.094141   54452 logs.go:284] No container was found matching "kube-scheduler"
	I1206 09:05:47.094146   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:05:47.094204   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:05:47.117873   54452 cri.go:89] found id: ""
	I1206 09:05:47.117887   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.117895   54452 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:05:47.117900   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:05:47.117957   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:05:47.141782   54452 cri.go:89] found id: ""
	I1206 09:05:47.141796   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.141803   54452 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 09:05:47.141809   54452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:05:47.141869   54452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:05:47.167265   54452 cri.go:89] found id: ""
	I1206 09:05:47.167280   54452 logs.go:282] 0 containers: []
	W1206 09:05:47.167287   54452 logs.go:284] No container was found matching "kindnet"
	I1206 09:05:47.167295   54452 logs.go:123] Gathering logs for kubelet ...
	I1206 09:05:47.167314   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:05:47.224071   54452 logs.go:123] Gathering logs for dmesg ...
	I1206 09:05:47.224090   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:05:47.235798   54452 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:05:47.235814   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:05:47.303156   54452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:05:47.295299   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.295956   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297451   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297881   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.299336   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 09:05:47.295299   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.295956   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297451   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.297881   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:05:47.299336   20966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:05:47.303181   54452 logs.go:123] Gathering logs for containerd ...
	I1206 09:05:47.303191   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:05:47.366843   54452 logs.go:123] Gathering logs for container status ...
	I1206 09:05:47.366863   54452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 09:05:47.396270   54452 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 09:05:47.396302   54452 out.go:285] * 
	W1206 09:05:47.396359   54452 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00022284s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:05:47.396374   54452 out.go:285] * 
	W1206 09:05:47.398505   54452 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:05:47.405628   54452 out.go:203] 
	W1206 09:05:47.408588   54452 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output quoted above under "Error starting cluster" (verbatim duplicate omitted)
	W1206 09:05:47.408634   54452 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 09:05:47.408679   54452 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 09:05:47.411976   54452 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948296356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948312964Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948379313Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948412085Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948441403Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948462491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948482866Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948510698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948529111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948562713Z" level=info msg="Connect containerd service"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.948903673Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.949608593Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967402561Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967484402Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967550135Z" level=info msg="Start subscribing containerd event"
	Dec 06 08:53:36 functional-090986 containerd[9728]: time="2025-12-06T08:53:36.967692902Z" level=info msg="Start recovering state"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019042107Z" level=info msg="Start event monitor"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019110196Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019122955Z" level=info msg="Start streaming server"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019132310Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019140786Z" level=info msg="runtime interface starting up..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019147531Z" level=info msg="starting plugins..."
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.019160085Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 08:53:37 functional-090986 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 08:53:37 functional-090986 containerd[9728]: time="2025-12-06T08:53:37.020711198Z" level=info msg="containerd successfully booted in 0.094795s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:07:25.796703   22331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:25.797529   22331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:25.799306   22331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:25.800215   22331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:25.801882   22331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 09:07:25 up 49 min,  0 user,  load average: 0.47, 0.26, 0.37
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 09:07:22 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:22 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 448.
	Dec 06 09:07:22 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:22 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:23 functional-090986 kubelet[22215]: E1206 09:07:23.021145   22215 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:23 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:23 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:23 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 449.
	Dec 06 09:07:23 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:23 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:23 functional-090986 kubelet[22221]: E1206 09:07:23.766807   22221 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:23 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:23 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:24 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 450.
	Dec 06 09:07:24 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:24 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:24 functional-090986 kubelet[22227]: E1206 09:07:24.530013   22227 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:24 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:24 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:25 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 451.
	Dec 06 09:07:25 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:25 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:25 functional-090986 kubelet[22248]: E1206 09:07:25.289624   22248 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:25 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:25 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
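The kubelet journal above pins down the failure: kubelet v1.35.0-beta.0 refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a tight loop (restart counter 448 through 451), and kubeadm's 4m0s wait on http://127.0.0.1:10248/healthz therefore times out. Per the SystemVerification warning, cgroup v1 can still be opted into by setting the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal sketch of that opt-in, assuming the docker-driver node is the container named functional-090986 and that the YAML key in /var/lib/kubelet/config.yaml is the camelCase failCgroupV1 (key spelling inferred from the warning text, not verified against this beta build):

	# Sketch only: opt kubelet back into cgroup v1 inside the minikube node, then
	# restart the unit. failCgroupV1 is an assumed key name, and kubeadm may
	# rewrite config.yaml on the next init.
	docker exec functional-090986 /bin/bash -c \
	  'echo "failCgroupV1: false" >> /var/lib/kubelet/config.yaml \
	   && systemctl restart kubelet'

The longer-term fix implied by the warning itself is migrating the host (a 5.15.0-1084-aws kernel here) to cgroup v2 rather than extending the v1 opt-in.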
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (366.733434ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.50s)
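For anyone reproducing this, the Suggestion line in the log above translates directly into a retry of the start command with the advised kubelet override, and the two troubleshooting commands it references can be run inside the node for a live view of the restart loop. The invocation below mirrors the report's own binary and profile name; whether the cgroup-driver override actually clears the cgroup v1 validation error is not established by this run:

	# Retry with the override suggested in the log output above.
	out/minikube-linux-arm64 start -p functional-090986 \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Inspect the kubelet directly, per the hints in the kubeadm error text.
	docker exec functional-090986 systemctl status kubelet
	docker exec functional-090986 journalctl -xeu kubelet --no-pager | tail -n 20
	docker exec functional-090986 curl -sSL http://127.0.0.1:10248/healthz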

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 9 more times]
I1206 09:06:05.893359    4292 retry.go:31] will retry after 3.135399399s: Temporary Error: Get "http://10.98.181.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 12 more times]
I1206 09:06:19.029759    4292 retry.go:31] will retry after 5.444299522s: Temporary Error: Get "http://10.98.181.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 15 more times]
I1206 09:06:34.475487    4292 retry.go:31] will retry after 4.426862948s: Temporary Error: Get "http://10.98.181.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1206 09:06:36.062176    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 12 more times]
I1206 09:06:48.903355    4292 retry.go:31] will retry after 5.297225743s: Temporary Error: Get "http://10.98.181.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 14 more times]
I1206 09:07:04.201809    4292 retry.go:31] will retry after 9.788069499s: Temporary Error: Get "http://10.98.181.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 35 more times]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1206 09:07:57.331822    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: (last message repeated 101 times)
E1206 09:09:39.139875    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: (last message repeated 16 times)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (330.261981ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
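For reference, the wait loop behind the collapsed warnings above amounts to listing pods by label selector until one is Running or the 4m0s deadline expires. The following Go sketch reproduces that pattern with client-go under stated assumptions: the namespace, label selector, and 4m0s timeout are taken from the log, while the kubeconfig path, the 3-second poll interval, and all identifiers are illustrative and not the actual helpers_test.go / functional_test_pvc_test.go code.

// poll_sketch.go: a minimal sketch (assumed, not the minikube helper itself)
// of waiting for a labelled pod to reach Running, mirroring the poll that
// produced the "connection refused" warnings above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default kubeconfig location (~/.kube/config), which is
	// where minikube normally writes its context.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// 4m0s overall deadline, matching the failure message in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	selector := "integration-test=storage-provisioner" // from the warning lines
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// With the apiserver down, this is the repeated
			// "connection refused" warning seen above.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					fmt.Printf("pod %s is Running\n", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			// Corresponds to "failed to start within 4m0s: context deadline exceeded".
			fmt.Println("failed to start within 4m0s:", ctx.Err())
			return
		case <-time.After(3 * time.Second): // assumed poll interval
		}
	}
}

Under these assumptions, every List call against the stopped apiserver at 192.168.49.2:8441 fails fast with connect: connection refused, so the loop spins until the context deadline, which matches the warning volume and the final "context deadline exceeded" in this log.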
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (316.236216ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-090986 image load --daemon kicbase/echo-server:functional-090986 --alsologtostderr                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image ls                                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image save kicbase/echo-server:functional-090986 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image rm kicbase/echo-server:functional-090986 --alsologtostderr                                                                              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image ls                                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image ls                                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image save --daemon kicbase/echo-server:functional-090986 --alsologtostderr                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh            │ functional-090986 ssh sudo cat /etc/test/nested/copy/4292/hosts                                                                                                 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh            │ functional-090986 ssh sudo cat /etc/ssl/certs/4292.pem                                                                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh            │ functional-090986 ssh sudo cat /usr/share/ca-certificates/4292.pem                                                                                              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh            │ functional-090986 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh            │ functional-090986 ssh sudo cat /etc/ssl/certs/42922.pem                                                                                                         │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh            │ functional-090986 ssh sudo cat /usr/share/ca-certificates/42922.pem                                                                                             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh            │ functional-090986 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image ls --format short --alsologtostderr                                                                                                     │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image ls --format yaml --alsologtostderr                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh            │ functional-090986 ssh pgrep buildkitd                                                                                                                           │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ image          │ functional-090986 image build -t localhost/my-image:functional-090986 testdata/build --alsologtostderr                                                          │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image ls                                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image ls --format json --alsologtostderr                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image          │ functional-090986 image ls --format table --alsologtostderr                                                                                                     │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ update-context │ functional-090986 update-context --alsologtostderr -v=2                                                                                                         │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ update-context │ functional-090986 update-context --alsologtostderr -v=2                                                                                                         │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ update-context │ functional-090986 update-context --alsologtostderr -v=2                                                                                                         │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:07:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:07:41.725234   71654 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:07:41.725495   71654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.725523   71654 out.go:374] Setting ErrFile to fd 2...
	I1206 09:07:41.725565   71654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.726047   71654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:07:41.726648   71654 out.go:368] Setting JSON to false
	I1206 09:07:41.727841   71654 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3013,"bootTime":1765009049,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:07:41.727988   71654 start.go:143] virtualization:  
	I1206 09:07:41.731273   71654 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:07:41.735271   71654 notify.go:221] Checking for updates...
	I1206 09:07:41.736168   71654 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:07:41.739723   71654 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:07:41.742669   71654 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:07:41.745442   71654 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:07:41.748505   71654 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:07:41.751454   71654 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:07:41.754797   71654 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:41.755467   71654 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:07:41.792402   71654 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:07:41.792535   71654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:41.854255   71654 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:41.844749592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:07:41.854367   71654 docker.go:319] overlay module found
	I1206 09:07:41.857426   71654 out.go:179] * Using the docker driver based on existing profile
	I1206 09:07:41.860347   71654 start.go:309] selected driver: docker
	I1206 09:07:41.860369   71654 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:41.860483   71654 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:07:41.860587   71654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:41.915317   71654 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:41.905871138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:07:41.915806   71654 cni.go:84] Creating CNI manager for ""
	I1206 09:07:41.915877   71654 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:07:41.915925   71654 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:41.918972   71654 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.883729576Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.884226278Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:47 functional-090986 containerd[9728]: time="2025-12-06T09:07:47.966432749Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\""
	Dec 06 09:07:47 functional-090986 containerd[9728]: time="2025-12-06T09:07:47.968929331Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:47 functional-090986 containerd[9728]: time="2025-12-06T09:07:47.971492768Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 09:07:47 functional-090986 containerd[9728]: time="2025-12-06T09:07:47.979899158Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\" returns successfully"
	Dec 06 09:07:48 functional-090986 containerd[9728]: time="2025-12-06T09:07:48.215796605Z" level=info msg="No images store for sha256:741daa0ab7c37bd505db8952c22c16f15a5ff83b4ec00b69e71b0c54ad2fa033"
	Dec 06 09:07:48 functional-090986 containerd[9728]: time="2025-12-06T09:07:48.217998718Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:48 functional-090986 containerd[9728]: time="2025-12-06T09:07:48.226243974Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:48 functional-090986 containerd[9728]: time="2025-12-06T09:07:48.226597069Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.022301593Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\""
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.024706112Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.026879483Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.036066852Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\" returns successfully"
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.721562238Z" level=info msg="No images store for sha256:dc43d44d2a9d0a9bf7bc9e4520f7949df149dd6139b45c6ff9e1282e3852255f"
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.723764474Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.731153254Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.731859714Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:57 functional-090986 containerd[9728]: time="2025-12-06T09:07:57.706536087Z" level=info msg="connecting to shim no204n25ky6kmkkbrgnl31gu2" address="unix:///run/containerd/s/36e2dc9232e4f3c8d638d58fe1cdba3c7ae21a0e7e4c45391c7e755f5be6c9bb" namespace=k8s.io protocol=ttrpc version=3
	Dec 06 09:07:57 functional-090986 containerd[9728]: time="2025-12-06T09:07:57.776299352Z" level=info msg="shim disconnected" id=no204n25ky6kmkkbrgnl31gu2 namespace=k8s.io
	Dec 06 09:07:57 functional-090986 containerd[9728]: time="2025-12-06T09:07:57.777081833Z" level=info msg="cleaning up after shim disconnected" id=no204n25ky6kmkkbrgnl31gu2 namespace=k8s.io
	Dec 06 09:07:57 functional-090986 containerd[9728]: time="2025-12-06T09:07:57.777114547Z" level=info msg="cleaning up dead shim" id=no204n25ky6kmkkbrgnl31gu2 namespace=k8s.io
	Dec 06 09:07:58 functional-090986 containerd[9728]: time="2025-12-06T09:07:58.078673860Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-090986\""
	Dec 06 09:07:58 functional-090986 containerd[9728]: time="2025-12-06T09:07:58.087344202Z" level=info msg="ImageCreate event name:\"sha256:3c36381cae33ea3bc2c2e6a6b4714894d0902b3179a0f06e465a22add810716c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:58 functional-090986 containerd[9728]: time="2025-12-06T09:07:58.088208456Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:09:57.761541   25177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:09:57.762236   25177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:09:57.763873   25177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:09:57.764436   25177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:09:57.766240   25177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 09:09:57 up 52 min,  0 user,  load average: 0.34, 0.45, 0.43
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 09:09:54 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:09:55 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 06 09:09:55 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:55 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:55 functional-090986 kubelet[25044]: E1206 09:09:55.267134   25044 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:09:55 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:09:55 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:09:55 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 06 09:09:55 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:55 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:56 functional-090986 kubelet[25050]: E1206 09:09:56.020194   25050 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:09:56 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:09:56 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:09:56 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 06 09:09:56 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:56 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:56 functional-090986 kubelet[25068]: E1206 09:09:56.769092   25068 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:09:56 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:09:56 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:09:57 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 654.
	Dec 06 09:09:57 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:57 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:09:57 functional-090986 kubelet[25112]: E1206 09:09:57.529945   25112 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:09:57 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:09:57 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
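The kubelet journal above is the root cause behind every "connection refused" in this test: restarts 651 through 654 all die in configuration validation because this kubelet build refuses to start on a cgroup v1 host, so the apiserver static pod never comes up on port 8441. One way to confirm which cgroup hierarchy the node container actually sees (a manual triage sketch, not part of the harness):

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" on
	# /sys/fs/cgroup is the classic sign of a legacy cgroup v1 mount,
	# matching the validation error in the journal above.
	docker exec functional-090986 stat -fc %T /sys/fs/cgroup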
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (319.422274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.70s)
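Since the docker inspect output above publishes 8441/tcp to 127.0.0.1:32791 on the host, the same apiserver outage can be confirmed without kubectl (a hedged one-liner; the host port is assigned per run, so read it from NetworkSettings.Ports first):

	# -k because the apiserver serves the cluster-internal CA; expect
	# "connection refused" for as long as the kubelet crash-loops.
	curl -k https://127.0.0.1:32791/healthz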

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-090986 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-090986 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (59.684053ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-090986 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
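Separately from the outage, the template itself turns the empty node list into a confusing panic: "index .items 0" is evaluated even when the apiserver returns zero items, burying the real "connection refused" error under a template failure. Guarding the slice keeps the template total; a sketch of a hardened query (not what the test currently runs):

	# {{if .items}} short-circuits on an empty list, so the command exits
	# with only the connection error instead of a template panic.
	kubectl --context functional-090986 get nodes --output=go-template --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'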
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	        "Created": "2025-12-06T08:38:54.137142754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T08:38:54.209992266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
	        "LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
	        "Name": "/functional-090986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-090986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-090986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
	                "LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-090986",
	                "Source": "/var/lib/docker/volumes/functional-090986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-090986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-090986",
	                "name.minikube.sigs.k8s.io": "functional-090986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
	            "SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-090986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:de:4e:f1:7a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
	                    "EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-090986",
	                        "0202a22115df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
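
The PortBindings in the HostConfig above are declared with empty HostPort values, so Docker assigns ephemeral host ports at container start; the resolved mappings appear only under NetworkSettings.Ports (22/tcp -> 127.0.0.1:32788, 8441/tcp -> 127.0.0.1:32791, and so on). A single mapping can be pulled out of this inspect output with the same Go template the tunnel test runs later in this report for 22/tcp; substituting the apiserver port key here is purely illustrative:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-090986
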
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 2 (335.498106ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount2 --alsologtostderr -v=1                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ mount     │ -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount3 --alsologtostderr -v=1                            │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh       │ functional-090986 ssh findmnt -T /mount1                                                                                                                        │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh findmnt -T /mount2                                                                                                                        │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh findmnt -T /mount3                                                                                                                        │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ mount     │ -p functional-090986 --kill=true                                                                                                                                │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ start     │ -p functional-090986 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ start     │ -p functional-090986 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ start     │ -p functional-090986 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                       │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-090986 --alsologtostderr -v=1                                                                                                  │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ ssh       │ functional-090986 ssh sudo systemctl is-active docker                                                                                                           │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ ssh       │ functional-090986 ssh sudo systemctl is-active crio                                                                                                             │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │                     │
	│ image     │ functional-090986 image load --daemon kicbase/echo-server:functional-090986 --alsologtostderr                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image ls                                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image load --daemon kicbase/echo-server:functional-090986 --alsologtostderr                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image ls                                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image load --daemon kicbase/echo-server:functional-090986 --alsologtostderr                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image ls                                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image save kicbase/echo-server:functional-090986 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image rm kicbase/echo-server:functional-090986 --alsologtostderr                                                                              │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image ls                                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image ls                                                                                                                                      │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	│ image     │ functional-090986 image save --daemon kicbase/echo-server:functional-090986 --alsologtostderr                                                                   │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 09:07 UTC │ 06 Dec 25 09:07 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:07:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:07:41.725234   71654 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:07:41.725495   71654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.725523   71654 out.go:374] Setting ErrFile to fd 2...
	I1206 09:07:41.725565   71654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.726047   71654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:07:41.726648   71654 out.go:368] Setting JSON to false
	I1206 09:07:41.727841   71654 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3013,"bootTime":1765009049,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:07:41.727988   71654 start.go:143] virtualization:  
	I1206 09:07:41.731273   71654 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:07:41.735271   71654 notify.go:221] Checking for updates...
	I1206 09:07:41.736168   71654 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:07:41.739723   71654 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:07:41.742669   71654 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:07:41.745442   71654 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:07:41.748505   71654 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:07:41.751454   71654 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:07:41.754797   71654 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:41.755467   71654 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:07:41.792402   71654 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:07:41.792535   71654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:41.854255   71654 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:41.844749592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:07:41.854367   71654 docker.go:319] overlay module found
	I1206 09:07:41.857426   71654 out.go:179] * Using the docker driver based on existing profile
	I1206 09:07:41.860347   71654 start.go:309] selected driver: docker
	I1206 09:07:41.860369   71654 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:41.860483   71654 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:07:41.860587   71654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:41.915317   71654 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:41.905871138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:07:41.915806   71654 cni.go:84] Creating CNI manager for ""
	I1206 09:07:41.915877   71654 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:07:41.915925   71654 start.go:353] cluster config:
	{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:41.918972   71654 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 09:07:45 functional-090986 containerd[9728]: time="2025-12-06T09:07:45.793028248Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.618216169Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\""
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.620786023Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.622977207Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.632042278Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\" returns successfully"
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.866767043Z" level=info msg="No images store for sha256:741daa0ab7c37bd505db8952c22c16f15a5ff83b4ec00b69e71b0c54ad2fa033"
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.868852593Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.883729576Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:46 functional-090986 containerd[9728]: time="2025-12-06T09:07:46.884226278Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:47 functional-090986 containerd[9728]: time="2025-12-06T09:07:47.966432749Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\""
	Dec 06 09:07:47 functional-090986 containerd[9728]: time="2025-12-06T09:07:47.968929331Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:47 functional-090986 containerd[9728]: time="2025-12-06T09:07:47.971492768Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 09:07:47 functional-090986 containerd[9728]: time="2025-12-06T09:07:47.979899158Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\" returns successfully"
	Dec 06 09:07:48 functional-090986 containerd[9728]: time="2025-12-06T09:07:48.215796605Z" level=info msg="No images store for sha256:741daa0ab7c37bd505db8952c22c16f15a5ff83b4ec00b69e71b0c54ad2fa033"
	Dec 06 09:07:48 functional-090986 containerd[9728]: time="2025-12-06T09:07:48.217998718Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:48 functional-090986 containerd[9728]: time="2025-12-06T09:07:48.226243974Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:48 functional-090986 containerd[9728]: time="2025-12-06T09:07:48.226597069Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.022301593Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\""
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.024706112Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.026879483Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.036066852Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-090986\" returns successfully"
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.721562238Z" level=info msg="No images store for sha256:dc43d44d2a9d0a9bf7bc9e4520f7949df149dd6139b45c6ff9e1282e3852255f"
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.723764474Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-090986\""
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.731153254Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:07:49 functional-090986 containerd[9728]: time="2025-12-06T09:07:49.731859714Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-090986\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 09:07:51.396981   23741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:51.398095   23741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:51.399815   23741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:51.400110   23741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 09:07:51.401590   23741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 09:07:51 up 50 min,  0 user,  load average: 1.62, 0.56, 0.46
	Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 09:07:47 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:48 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 06 09:07:48 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:48 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:48 functional-090986 kubelet[23482]: E1206 09:07:48.536773   23482 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:48 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:48 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:49 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 06 09:07:49 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:49 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:49 functional-090986 kubelet[23544]: E1206 09:07:49.282314   23544 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:49 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:49 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:49 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 06 09:07:49 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:49 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:50 functional-090986 kubelet[23602]: E1206 09:07:50.041728   23602 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:50 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:50 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:07:50 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 06 09:07:50 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:50 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:07:50 functional-090986 kubelet[23653]: E1206 09:07:50.783334   23653 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:07:50 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:07:50 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 2 (337.177388ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.43s)
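
Every kubelet restart attempt in the post-mortem above (restart counters 482 through 485) dies on the same validation error: the v1.35.0-beta.0 kubelet refuses to run on a host using cgroup v1, so the apiserver on port 8441 never comes up and the remaining parallel tests below fail with connection-refused errors or exit status 103. A quick way to confirm which cgroup version a host exposes (a diagnostic sketch, not part of the test run) is:

	stat -fc %T /sys/fs/cgroup/
	# cgroup2fs indicates cgroup v2; tmpfs indicates cgroup v1
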

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1206 09:05:55.318491   67465 out.go:360] Setting OutFile to fd 1 ...
I1206 09:05:55.321996   67465 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:05:55.323443   67465 out.go:374] Setting ErrFile to fd 2...
I1206 09:05:55.323502   67465 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:05:55.323953   67465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 09:05:55.324372   67465 mustload.go:66] Loading cluster: functional-090986
I1206 09:05:55.325164   67465 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:05:55.327988   67465 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 09:05:55.360966   67465 host.go:66] Checking if "functional-090986" exists ...
I1206 09:05:55.361285   67465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 09:05:55.459367   67465 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:05:55.445345761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 09:05:55.459531   67465 api_server.go:166] Checking apiserver status ...
I1206 09:05:55.459583   67465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 09:05:55.459633   67465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 09:05:55.498444   67465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
W1206 09:05:55.623688   67465 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1206 09:05:55.627113   67465 out.go:179] * The control-plane node functional-090986 apiserver is not running: (state=Stopped)
I1206 09:05:55.630307   67465 out.go:179]   To start a cluster, run: "minikube start -p functional-090986"

stdout: * The control-plane node functional-090986 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-090986"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 67464: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-090986 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-090986 apply -f testdata/testsvc.yaml: exit status 1 (138.603476ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-090986 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.14s)
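
The stderr's hint to pass --validate=false would not help here: validation fails because the openapi download cannot even open a TCP connection to 192.168.49.2:8441, i.e. the apiserver itself is unreachable, and the subsequent apply would fail the same way. A direct probe of the endpoint (a sketch; /livez is the standard unauthenticated apiserver health path) is:

	curl -k https://192.168.49.2:8441/livez
	# on this run the probe would likewise report: connect: connection refused
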

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (88.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.98.181.96": Temporary Error: Get "http://10.98.181.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-090986 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-090986 get svc nginx-svc: exit status 1 (67.540584ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-090986 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (88.17s)
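
AccessDirect fetches nginx by its ClusterIP (10.98.181.96) from the host, which only works while a minikube tunnel process from the earlier serial step is routing the cluster's service CIDR; with the apiserver down, kubectl cannot even confirm the service exists and the GET times out across the 88s retry window. The manual equivalent of what the test automates (a sketch) is:

	out/minikube-linux-arm64 -p functional-090986 tunnel &
	curl http://10.98.181.96
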

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-090986 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-090986 create deployment hello-node --image kicbase/echo-server: exit status 1 (61.139265ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-090986 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 service list: exit status 103 (269.386305ms)

-- stdout --
	* The control-plane node functional-090986 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-090986"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-090986 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-090986 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-090986\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 service list -o json: exit status 103 (270.877863ms)

-- stdout --
	* The control-plane node functional-090986 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-090986"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-090986 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 service --namespace=default --https --url hello-node: exit status 103 (288.390444ms)

-- stdout --
	* The control-plane node functional-090986 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-090986"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-090986 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 service hello-node --url --format={{.IP}}: exit status 103 (271.263614ms)

-- stdout --
	* The control-plane node functional-090986 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-090986"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-090986 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-090986 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-090986\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

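Note: the failure at functional_test.go:1558 is the harness rejecting the notice because it is not an IP literal, which is all `--format={{.IP}}` should ever print. An equivalent check, using net.ParseIP as a stand-in for whatever validation the harness actually performs:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// What `service hello-node --url --format={{.IP}}` printed above.
	got := "* The control-plane node functional-090986 apiserver is not running: (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-090986\""

	// net.ParseIP returns nil for anything that is not a literal IP address,
	// which is exactly why the test reports "is not a valid IP".
	if net.ParseIP(strings.TrimSpace(got)) == nil {
		fmt.Printf("%q is not a valid IP\n", got)
	}
}
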
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 service hello-node --url: exit status 103 (264.037547ms)

-- stdout --
	* The control-plane node functional-090986 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-090986"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-090986 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-090986 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-090986"
functional_test.go:1579: failed to parse "* The control-plane node functional-090986 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-090986\"": parse "* The control-plane node functional-090986 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-090986\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

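Note: the parse error at functional_test.go:1579 comes straight from net/url: the two-line notice embeds a newline, and url.Parse rejects any ASCII control character. A minimal reproduction using the exact string from above:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The "endpoint" the test tried to parse, verbatim from the output above.
	endpoint := "* The control-plane node functional-090986 apiserver is not running: (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-090986\""

	// The embedded '\n' is a control character, so Parse fails with the
	// same "net/url: invalid control character in URL" seen in the log.
	_, err := url.Parse(endpoint)
	fmt.Println(err)
}
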
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765012051876100393" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765012051876100393" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765012051876100393" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001/test-1765012051876100393
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (392.566552ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1206 09:07:32.268953    4292 retry.go:31] will retry after 361.138891ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 09:07 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 09:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 09:07 test-1765012051876100393
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh cat /mount-9p/test-1765012051876100393
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-090986 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-090986 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (77.214213ms)

** stderr **
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-090986 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (270.714255ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=37301)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  6 09:07 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  6 09:07 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  6 09:07 test-1765012051876100393
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-090986 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:37301
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001:/mount-9p --alsologtostderr -v=1] stderr:
I1206 09:07:31.959678   69686 out.go:360] Setting OutFile to fd 1 ...
I1206 09:07:31.959878   69686 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:31.959889   69686 out.go:374] Setting ErrFile to fd 2...
I1206 09:07:31.959895   69686 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:31.960158   69686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 09:07:31.960449   69686 mustload.go:66] Loading cluster: functional-090986
I1206 09:07:31.960848   69686 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:31.961362   69686 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 09:07:31.995774   69686 host.go:66] Checking if "functional-090986" exists ...
I1206 09:07:31.996238   69686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 09:07:32.123039   69686 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:32.113053677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 09:07:32.123197   69686 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 09:07:32.152566   69686 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001 into VM as /mount-9p ...
I1206 09:07:32.157377   69686 out.go:179]   - Mount type:   9p
I1206 09:07:32.160187   69686 out.go:179]   - User ID:      docker
I1206 09:07:32.163070   69686 out.go:179]   - Group ID:     docker
I1206 09:07:32.165908   69686 out.go:179]   - Version:      9p2000.L
I1206 09:07:32.169328   69686 out.go:179]   - Message Size: 262144
I1206 09:07:32.172164   69686 out.go:179]   - Options:      map[]
I1206 09:07:32.175205   69686 out.go:179]   - Bind Address: 192.168.49.1:37301
I1206 09:07:32.178111   69686 out.go:179] * Userspace file server: 
I1206 09:07:32.178358   69686 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 09:07:32.178464   69686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 09:07:32.201130   69686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 09:07:32.310257   69686 mount.go:180] unmount for /mount-9p ran successfully
I1206 09:07:32.310297   69686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1206 09:07:32.318700   69686 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37301,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1206 09:07:32.329394   69686 main.go:127] stdlog: ufs.go:141 connected
I1206 09:07:32.329554   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tversion tag 65535 msize 262144 version '9P2000.L'
I1206 09:07:32.329597   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rversion tag 65535 msize 262144 version '9P2000'
I1206 09:07:32.329879   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1206 09:07:32.329946   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rattach tag 0 aqid (4431b f2ea97a1 'd')
I1206 09:07:32.330789   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 0
I1206 09:07:32.330849   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431b f2ea97a1 'd') m d775 at 0 mt 1765012051 l 4096 t 0 d 0 ext )
I1206 09:07:32.335577   69686 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/.mount-process: {Name:mk4d653a380d58a3c9f3e8ffab3cde0b93da5379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:07:32.335773   69686 mount.go:105] mount successful: ""
I1206 09:07:32.339255   69686 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3486553265/001 to /mount-9p
I1206 09:07:32.342265   69686 out.go:203] 
I1206 09:07:32.345189   69686 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1206 09:07:33.191528   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 0
I1206 09:07:33.191641   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431b f2ea97a1 'd') m d775 at 0 mt 1765012051 l 4096 t 0 d 0 ext )
I1206 09:07:33.192000   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 1 
I1206 09:07:33.192034   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 
I1206 09:07:33.192180   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Topen tag 0 fid 1 mode 0
I1206 09:07:33.192228   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Ropen tag 0 qid (4431b f2ea97a1 'd') iounit 0
I1206 09:07:33.192351   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 0
I1206 09:07:33.192385   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431b f2ea97a1 'd') m d775 at 0 mt 1765012051 l 4096 t 0 d 0 ext )
I1206 09:07:33.192557   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 1 offset 0 count 262120
I1206 09:07:33.192679   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 258
I1206 09:07:33.192816   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 1 offset 258 count 261862
I1206 09:07:33.192845   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 0
I1206 09:07:33.192973   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 1 offset 258 count 262120
I1206 09:07:33.193000   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 0
I1206 09:07:33.193128   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 09:07:33.193170   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 (4431c f2ea97a1 '') 
I1206 09:07:33.193314   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.193351   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431c f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.193476   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.193505   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431c f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.193642   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 2
I1206 09:07:33.193672   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.193803   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 2 0:'test-1765012051876100393' 
I1206 09:07:33.193833   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 (4431e f2ea97a1 '') 
I1206 09:07:33.193960   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.193990   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('test-1765012051876100393' 'jenkins' 'jenkins' '' q (4431e f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.194112   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.194146   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('test-1765012051876100393' 'jenkins' 'jenkins' '' q (4431e f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.194277   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 2
I1206 09:07:33.194313   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.194441   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 09:07:33.194478   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 (4431d f2ea97a1 '') 
I1206 09:07:33.194586   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.194618   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431d f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.194747   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.194776   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431d f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.194907   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 2
I1206 09:07:33.194928   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.195046   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 1 offset 258 count 262120
I1206 09:07:33.195072   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 0
I1206 09:07:33.195208   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 1
I1206 09:07:33.195233   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.467631   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 1 0:'test-1765012051876100393' 
I1206 09:07:33.467700   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 (4431e f2ea97a1 '') 
I1206 09:07:33.467869   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 1
I1206 09:07:33.467922   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('test-1765012051876100393' 'jenkins' 'jenkins' '' q (4431e f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.468097   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 1 newfid 2 
I1206 09:07:33.468160   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 
I1206 09:07:33.468286   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Topen tag 0 fid 2 mode 0
I1206 09:07:33.468350   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Ropen tag 0 qid (4431e f2ea97a1 '') iounit 0
I1206 09:07:33.468486   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 1
I1206 09:07:33.468544   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('test-1765012051876100393' 'jenkins' 'jenkins' '' q (4431e f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.468712   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 2 offset 0 count 262120
I1206 09:07:33.468764   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 24
I1206 09:07:33.468913   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 2 offset 24 count 262120
I1206 09:07:33.468950   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 0
I1206 09:07:33.469096   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 2 offset 24 count 262120
I1206 09:07:33.469126   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 0
I1206 09:07:33.469645   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 2
I1206 09:07:33.469680   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.470101   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 1
I1206 09:07:33.470138   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.822088   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 0
I1206 09:07:33.822175   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431b f2ea97a1 'd') m d775 at 0 mt 1765012051 l 4096 t 0 d 0 ext )
I1206 09:07:33.822542   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 1 
I1206 09:07:33.822579   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 
I1206 09:07:33.822705   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Topen tag 0 fid 1 mode 0
I1206 09:07:33.822751   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Ropen tag 0 qid (4431b f2ea97a1 'd') iounit 0
I1206 09:07:33.822896   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 0
I1206 09:07:33.822943   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431b f2ea97a1 'd') m d775 at 0 mt 1765012051 l 4096 t 0 d 0 ext )
I1206 09:07:33.823102   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 1 offset 0 count 262120
I1206 09:07:33.823212   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 258
I1206 09:07:33.823358   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 1 offset 258 count 261862
I1206 09:07:33.823401   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 0
I1206 09:07:33.823530   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 1 offset 258 count 262120
I1206 09:07:33.823558   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 0
I1206 09:07:33.823697   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 09:07:33.823734   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 (4431c f2ea97a1 '') 
I1206 09:07:33.823849   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.823894   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431c f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.824029   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.824059   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431c f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.824184   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 2
I1206 09:07:33.824205   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.824361   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 2 0:'test-1765012051876100393' 
I1206 09:07:33.824398   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 (4431e f2ea97a1 '') 
I1206 09:07:33.824526   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.824558   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('test-1765012051876100393' 'jenkins' 'jenkins' '' q (4431e f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.824687   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.824718   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('test-1765012051876100393' 'jenkins' 'jenkins' '' q (4431e f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.824841   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 2
I1206 09:07:33.824861   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.825006   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 09:07:33.825038   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rwalk tag 0 (4431d f2ea97a1 '') 
I1206 09:07:33.825163   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.825200   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431d f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.825338   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tstat tag 0 fid 2
I1206 09:07:33.825371   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431d f2ea97a1 '') m 644 at 0 mt 1765012051 l 24 t 0 d 0 ext )
I1206 09:07:33.825492   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 2
I1206 09:07:33.825510   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.825645   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tread tag 0 fid 1 offset 258 count 262120
I1206 09:07:33.825672   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rread tag 0 count 0
I1206 09:07:33.825805   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 1
I1206 09:07:33.825840   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:33.827025   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1206 09:07:33.827095   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rerror tag 0 ename 'file not found' ecode 0
I1206 09:07:34.098404   69686 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40714 Tclunk tag 0 fid 0
I1206 09:07:34.098466   69686 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40714 Rclunk tag 0
I1206 09:07:34.099749   69686 main.go:127] stdlog: ufs.go:147 disconnected
I1206 09:07:34.122508   69686 out.go:179] * Unmounting /mount-9p ...
I1206 09:07:34.125706   69686 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 09:07:34.133756   69686 mount.go:180] unmount for /mount-9p ran successfully
I1206 09:07:34.133867   69686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/.mount-process: {Name:mk4d653a380d58a3c9f3e8ffab3cde0b93da5379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:07:34.136975   69686 out.go:203] 
W1206 09:07:34.139989   69686 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1206 09:07:34.142913   69686 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.35s)

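Note: the mount itself worked; the 9p trace above shows the ufs server negotiating the client down from the requested 9P2000.L to 9P2000 and serving reads, and the test only died when kubectl could not reach the apiserver on 192.168.49.2:8441. For reference, the harness verifies the mount by running `findmnt -T /mount-9p | grep 9p` in the guest and retrying on failure (retry.go:31 above); a minimal version of that polling loop, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls until findmnt sees a 9p filesystem at path,
// mirroring the retry visible in the log above.
func waitForMount(profile, path string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", path))
		if err := cmd.Run(); err == nil {
			return nil // the mount is visible inside the guest
		}
		time.Sleep(500 * time.Millisecond) // the harness uses a randomized delay
	}
	return fmt.Errorf("%s never showed up as a 9p mount", path)
}

func main() {
	if err := waitForMount("functional-090986", "/mount-9p", 10); err != nil {
		fmt.Println(err)
	}
}
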
TestKubernetesUpgrade (807.49s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-228904 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-228904 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.008454809s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-228904
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-228904: (3.273143392s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-228904 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-228904 status --format={{.Host}}: exit status 7 (97.508351ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
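Note: version_upgrade_test.go drives a plain start, stop, then start-with-newer-version sequence, and everything up to this point succeeded; only the v1.35.0-beta.0 start below times out. The same flow reduced to code, with the commands and profile name taken verbatim from this log (a sketch, not the test's actual helpers):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one step of the upgrade sequence and reports failure.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	steps := [][]string{
		{"start", "-p", "kubernetes-upgrade-228904", "--memory=3072", "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=containerd"},
		{"stop", "-p", "kubernetes-upgrade-228904"},
		{"start", "-p", "kubernetes-upgrade-228904", "--memory=3072", "--kubernetes-version=v1.35.0-beta.0", "--driver=docker", "--container-runtime=containerd"},
	}
	for i, s := range steps {
		if err := run(s...); err != nil {
			fmt.Printf("step %d (%s) failed: %v\n", i+1, s[0], err)
			return
		}
	}
}
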
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-228904 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-228904 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m36.854565651s)

-- stdout --
	* [kubernetes-upgrade-228904] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-228904" primary control-plane node in "kubernetes-upgrade-228904" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...

-- /stdout --
** stderr ** 
	I1206 09:37:22.083742  203272 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:37:22.083964  203272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:37:22.083986  203272 out.go:374] Setting ErrFile to fd 2...
	I1206 09:37:22.084007  203272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:37:22.084315  203272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:37:22.084753  203272 out.go:368] Setting JSON to false
	I1206 09:37:22.085751  203272 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4793,"bootTime":1765009049,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:37:22.085860  203272 start.go:143] virtualization:  
	I1206 09:37:22.089941  203272 out.go:179] * [kubernetes-upgrade-228904] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:37:22.093183  203272 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:37:22.093284  203272 notify.go:221] Checking for updates...
	I1206 09:37:22.100088  203272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:37:22.103112  203272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:37:22.106588  203272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:37:22.109973  203272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:37:22.113270  203272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:37:22.117355  203272 config.go:182] Loaded profile config "kubernetes-upgrade-228904": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1206 09:37:22.117999  203272 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:37:22.187853  203272 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:37:22.187979  203272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:37:22.364015  203272 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-06 09:37:22.346215913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:37:22.364113  203272 docker.go:319] overlay module found
	I1206 09:37:22.367941  203272 out.go:179] * Using the docker driver based on existing profile
	I1206 09:37:22.370956  203272 start.go:309] selected driver: docker
	I1206 09:37:22.370976  203272 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-228904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-228904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:37:22.371057  203272 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:37:22.371780  203272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:37:22.513610  203272 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-06 09:37:22.498216032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:37:22.513949  203272 cni.go:84] Creating CNI manager for ""
	I1206 09:37:22.514014  203272 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:37:22.514048  203272 start.go:353] cluster config:
	{Name:kubernetes-upgrade-228904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-228904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:37:22.517859  203272 out.go:179] * Starting "kubernetes-upgrade-228904" primary control-plane node in "kubernetes-upgrade-228904" cluster
	I1206 09:37:22.522025  203272 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 09:37:22.525206  203272 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:37:22.528294  203272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:37:22.528335  203272 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 09:37:22.528345  203272 cache.go:65] Caching tarball of preloaded images
	I1206 09:37:22.528431  203272 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 09:37:22.528440  203272 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 09:37:22.528554  203272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/config.json ...
	I1206 09:37:22.528767  203272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:37:22.564608  203272 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:37:22.564634  203272 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:37:22.564649  203272 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:37:22.564693  203272 start.go:360] acquireMachinesLock for kubernetes-upgrade-228904: {Name:mkd7dc9c0e5a93b95e92b85b7364a155904b94c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:37:22.564752  203272 start.go:364] duration metric: took 36.653µs to acquireMachinesLock for "kubernetes-upgrade-228904"
	I1206 09:37:22.564776  203272 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:37:22.564785  203272 fix.go:54] fixHost starting: 
	I1206 09:37:22.565036  203272 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-228904 --format={{.State.Status}}
	I1206 09:37:22.631096  203272 fix.go:112] recreateIfNeeded on kubernetes-upgrade-228904: state=Stopped err=<nil>
	W1206 09:37:22.631123  203272 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:37:22.634574  203272 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-228904" ...
	I1206 09:37:22.634661  203272 cli_runner.go:164] Run: docker start kubernetes-upgrade-228904
	I1206 09:37:23.157984  203272 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-228904 --format={{.State.Status}}
	I1206 09:37:23.218096  203272 kic.go:430] container "kubernetes-upgrade-228904" state is running.
	I1206 09:37:23.218544  203272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-228904
	I1206 09:37:23.286196  203272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/config.json ...
	I1206 09:37:23.286433  203272 machine.go:94] provisionDockerMachine start ...
	I1206 09:37:23.286500  203272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228904
	I1206 09:37:23.338351  203272 main.go:143] libmachine: Using SSH client type: native
	I1206 09:37:23.338679  203272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1206 09:37:23.338688  203272 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:37:23.339544  203272 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37218->127.0.0.1:33018: read: connection reset by peer
	I1206 09:37:26.503967  203272 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-228904
	
	I1206 09:37:26.503993  203272 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-228904"
	I1206 09:37:26.504107  203272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228904
	I1206 09:37:26.528603  203272 main.go:143] libmachine: Using SSH client type: native
	I1206 09:37:26.528930  203272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1206 09:37:26.528951  203272 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-228904 && echo "kubernetes-upgrade-228904" | sudo tee /etc/hostname
	I1206 09:37:26.742649  203272 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-228904
	
	I1206 09:37:26.742758  203272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228904
	I1206 09:37:26.775608  203272 main.go:143] libmachine: Using SSH client type: native
	I1206 09:37:26.775940  203272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1206 09:37:26.775963  203272 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-228904' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-228904/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-228904' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:37:26.932208  203272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:37:26.932236  203272 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 09:37:26.932260  203272 ubuntu.go:190] setting up certificates
	I1206 09:37:26.932269  203272 provision.go:84] configureAuth start
	I1206 09:37:26.932343  203272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-228904
	I1206 09:37:26.956948  203272 provision.go:143] copyHostCerts
	I1206 09:37:26.957023  203272 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 09:37:26.957042  203272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 09:37:26.957105  203272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 09:37:26.957216  203272 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 09:37:26.957227  203272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 09:37:26.957250  203272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 09:37:26.957310  203272 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 09:37:26.957320  203272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 09:37:26.957341  203272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 09:37:26.957390  203272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-228904 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-228904 localhost minikube]
	I1206 09:37:27.334672  203272 provision.go:177] copyRemoteCerts
	I1206 09:37:27.334740  203272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:37:27.334794  203272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228904
	I1206 09:37:27.353595  203272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/kubernetes-upgrade-228904/id_rsa Username:docker}
	I1206 09:37:27.459580  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 09:37:27.489364  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 09:37:27.536207  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:37:27.573570  203272 provision.go:87] duration metric: took 641.273883ms to configureAuth
	I1206 09:37:27.573613  203272 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:37:27.573797  203272 config.go:182] Loaded profile config "kubernetes-upgrade-228904": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:37:27.573811  203272 machine.go:97] duration metric: took 4.287369874s to provisionDockerMachine
	I1206 09:37:27.573820  203272 start.go:293] postStartSetup for "kubernetes-upgrade-228904" (driver="docker")
	I1206 09:37:27.573834  203272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:37:27.573884  203272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:37:27.573935  203272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228904
	I1206 09:37:27.608967  203272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/kubernetes-upgrade-228904/id_rsa Username:docker}
	I1206 09:37:27.716209  203272 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:37:27.720288  203272 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:37:27.720318  203272 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:37:27.720331  203272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 09:37:27.720387  203272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 09:37:27.720470  203272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 09:37:27.720580  203272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:37:27.728851  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:37:27.748595  203272 start.go:296] duration metric: took 174.748876ms for postStartSetup
	I1206 09:37:27.748683  203272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:37:27.748744  203272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228904
	I1206 09:37:27.766755  203272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/kubernetes-upgrade-228904/id_rsa Username:docker}
	I1206 09:37:27.875700  203272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
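
The two df probes above read the disk-usage figures for /var on the node; run together they are simply:

    # percent of /var in use (column 5 of the df -h summary row)
    df -h /var | awk 'NR==2{print $5}'
    # gigabytes still free (column 4 with -BG)
    df -BG /var | awk 'NR==2{print $4}'
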
	I1206 09:37:27.881047  203272 fix.go:56] duration metric: took 5.31623841s for fixHost
	I1206 09:37:27.881074  203272 start.go:83] releasing machines lock for "kubernetes-upgrade-228904", held for 5.316309057s
	I1206 09:37:27.881173  203272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-228904
	I1206 09:37:27.902098  203272 ssh_runner.go:195] Run: cat /version.json
	I1206 09:37:27.902149  203272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228904
	I1206 09:37:27.902379  203272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:37:27.902431  203272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228904
	I1206 09:37:27.938809  203272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/kubernetes-upgrade-228904/id_rsa Username:docker}
	I1206 09:37:27.964150  203272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/kubernetes-upgrade-228904/id_rsa Username:docker}
	I1206 09:37:28.056087  203272 ssh_runner.go:195] Run: systemctl --version
	I1206 09:37:28.179159  203272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:37:28.184271  203272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:37:28.184347  203272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:37:28.194523  203272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
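
The find invocation above renames any bridge or podman CNI config so it will not be picked up; the same command, quoted for interactive use (a sketch with the log's flags, using "$1" inside sh -c for safer quoting than the literal {}):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) \
         -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' \
      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
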
	I1206 09:37:28.194543  203272 start.go:496] detecting cgroup driver to use...
	I1206 09:37:28.194572  203272 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 09:37:28.194627  203272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:37:28.212385  203272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:37:28.238504  203272 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:37:28.238567  203272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:37:28.257677  203272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:37:28.288961  203272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:37:28.454667  203272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:37:28.599234  203272 docker.go:234] disabling docker service ...
	I1206 09:37:28.599299  203272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:37:28.615053  203272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:37:28.632133  203272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:37:28.792042  203272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:37:28.956194  203272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
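
Taken together, the systemctl calls above stop and mask every runtime that could compete with containerd for the CRI socket. As one script (a sketch; unit names exactly as logged):

    # stop and hide cri-dockerd (socket first so it cannot re-activate the service)
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    # same treatment for dockerd itself
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    # confirm nothing is left running
    sudo systemctl is-active --quiet service docker && echo "docker still active"
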
	I1206 09:37:28.974744  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:37:28.999047  203272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:37:29.015492  203272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:37:29.032102  203272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:37:29.032174  203272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:37:29.047542  203272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:37:29.058701  203272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:37:29.071461  203272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:37:29.086066  203272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:37:29.095898  203272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:37:29.105200  203272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:37:29.116244  203272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:37:29.127183  203272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:37:29.137201  203272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:37:29.146496  203272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:37:29.303341  203272 ssh_runner.go:195] Run: sudo systemctl restart containerd
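
The run of sed edits above rewrites /etc/containerd/config.toml in place before the restart; gathered into one annotated script (same expressions as the log lines, CFG is a hypothetical shorthand):

    CFG=/etc/containerd/config.toml
    # point crictl at the containerd socket
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and relax OOM-score restrictions
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' $CFG
    sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' $CFG
    # use cgroupfs (the driver detected on the host), not systemd cgroups
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' $CFG
    sudo sed -i '/systemd_cgroup/d' $CFG
    # normalize the runtime to runc v2 and reset the CNI conf dir
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' $CFG
    sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' $CFG
    sudo rm -rf /etc/cni/net.mk
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' $CFG
    # allow pods to bind low ports
    sudo sed -i '/^ *enable_unprivileged_ports = .*/d' $CFG
    sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' $CFG
    # bridged traffic through iptables, forwarding on, then restart
    sudo sysctl net.bridge.bridge-nf-call-iptables
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart containerd
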
	I1206 09:37:29.523059  203272 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:37:29.523129  203272 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:37:29.527636  203272 start.go:564] Will wait 60s for crictl version
	I1206 09:37:29.527706  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:29.531947  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:37:29.565607  203272 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 09:37:29.565694  203272 ssh_runner.go:195] Run: containerd --version
	I1206 09:37:29.602201  203272 ssh_runner.go:195] Run: containerd --version
	I1206 09:37:29.630487  203272 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 09:37:29.633599  203272 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-228904 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:37:29.652325  203272 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 09:37:29.656832  203272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
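
The bash one-liner above is an idempotent /etc/hosts update: filter out any stale entry, append the fresh one, and copy the result back in a single sudo step (the temp file exists because the redirection itself runs unprivileged):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
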
	I1206 09:37:29.666849  203272 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-228904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-228904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:37:29.666954  203272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:37:29.667017  203272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:37:29.703483  203272 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 09:37:29.703553  203272 ssh_runner.go:195] Run: which lz4
	I1206 09:37:29.708018  203272 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 09:37:29.715719  203272 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 09:37:29.715756  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1206 09:37:32.829843  203272 containerd.go:563] duration metric: took 3.121886573s to copy over tarball
	I1206 09:37:32.829918  203272 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 09:37:35.087890  203272 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.257947645s)
	I1206 09:37:35.087990  203272 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
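
The failing step, isolated. Every `Cannot open: File exists` error points under snapshots/3/fs, i.e. the preload is being unpacked over an overlayfs snapshot that already has content; the command itself (flags verbatim from the log) is:

    # extract the preloaded image tarball into /var; fails when snapshot dirs are already populated
    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4
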
	I1206 09:37:35.088079  203272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:37:35.130319  203272 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 09:37:35.130342  203272 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 09:37:35.130397  203272 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:37:35.130598  203272 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:37:35.130697  203272 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:37:35.130771  203272 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:37:35.130853  203272 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:37:35.130934  203272 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1206 09:37:35.131028  203272 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:37:35.131108  203272 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:37:35.134093  203272 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:37:35.134482  203272 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:37:35.134625  203272 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:37:35.134734  203272 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:37:35.134838  203272 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:37:35.135052  203272 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:37:35.135194  203272 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:37:35.135442  203272 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1206 09:37:35.503782  203272 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1206 09:37:35.503856  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:37:35.507524  203272 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1206 09:37:35.507597  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:37:35.523770  203272 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1206 09:37:35.523841  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:37:35.549087  203272 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1206 09:37:35.549167  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1206 09:37:35.552668  203272 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1206 09:37:35.552740  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1206 09:37:35.581470  203272 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1206 09:37:35.581540  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:37:35.604819  203272 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1206 09:37:35.604865  203272 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:37:35.604919  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:35.610968  203272 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1206 09:37:35.611042  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:37:35.620253  203272 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1206 09:37:35.620294  203272 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:37:35.620342  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:35.633415  203272 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1206 09:37:35.633460  203272 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:37:35.633508  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:35.637595  203272 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1206 09:37:35.637636  203272 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1206 09:37:35.637679  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:35.671082  203272 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1206 09:37:35.671132  203272 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:37:35.671181  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:35.693719  203272 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1206 09:37:35.693769  203272 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:37:35.693818  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:35.693906  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:37:35.704350  203272 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1206 09:37:35.704396  203272 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:37:35.704448  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:35.704549  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:37:35.704615  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:37:35.704676  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:37:35.704733  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:37:35.837689  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:37:35.837823  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:37:35.901520  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:37:35.901626  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:37:35.901701  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:37:35.901774  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:37:35.901848  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:37:35.962696  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:37:36.048266  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:37:36.128831  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:37:36.128918  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:37:36.128970  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:37:36.129026  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:37:36.129082  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:37:36.129146  203272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 09:37:36.263187  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:37:36.289660  203272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1206 09:37:36.289767  203272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1206 09:37:36.289863  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:37:36.289919  203272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1206 09:37:36.289969  203272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1206 09:37:36.290016  203272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:37:36.290072  203272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 09:37:36.333174  203272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 09:37:36.346395  203272 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1206 09:37:36.346453  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1206 09:37:36.346520  203272 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1206 09:37:36.346539  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1206 09:37:36.346598  203272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 09:37:36.386335  203272 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1206 09:37:36.386406  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1206 09:37:36.602657  203272 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1206 09:37:36.602814  203272 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1206 09:37:36.602888  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:37:36.669543  203272 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:37:36.669617  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:37:36.713665  203272 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1206 09:37:36.713711  203272 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:37:36.713759  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:37.552013  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:37:37.701522  203272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 09:37:37.701691  203272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:37:37.706336  203272 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1206 09:37:37.706427  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1206 09:37:37.803738  203272 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:37:37.803854  203272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:37:38.304134  203272 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 09:37:38.304245  203272 cache_images.go:94] duration metric: took 3.173890654s to LoadCachedImages
	W1206 09:37:38.304333  203272 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0: no such file or directory
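
With the preload unusable, minikube falls back to per-image checks and loads: `ctr images ls` to test presence, `crictl rmi` to clear a wrong-hash tag, then `ctr images import` from the on-disk cache. A sketch of that cycle for one image, assuming the cache file is already under /var/lib/minikube/images:

    IMG=registry.k8s.io/pause:3.10.1
    FILE=/var/lib/minikube/images/pause_3.10.1
    # already present under the k8s.io namespace?
    if ! sudo ctr -n=k8s.io images ls "name==$IMG" | grep -q "$IMG"; then
        sudo /usr/local/bin/crictl rmi "$IMG" 2>/dev/null || true  # drop any stale tag
        sudo ctr -n=k8s.io images import "$FILE"                   # load from cache
    fi
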
	I1206 09:37:38.304495  203272 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 09:37:38.304668  203272 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-228904 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-228904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:37:38.304770  203272 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:37:38.332221  203272 cni.go:84] Creating CNI manager for ""
	I1206 09:37:38.332246  203272 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:37:38.332264  203272 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:37:38.332295  203272 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-228904 NodeName:kubernetes-upgrade-228904 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:37:38.332423  203272 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-228904"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:37:38.332502  203272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:37:38.343718  203272 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:37:38.343795  203272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:37:38.352506  203272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1206 09:37:38.371790  203272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:37:38.386864  203272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1206 09:37:38.404535  203272 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:37:38.408836  203272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:37:38.428977  203272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:37:38.615030  203272 ssh_runner.go:195] Run: sudo systemctl start kubelet
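
The block above stages the kubelet drop-in, unit file, and kubeadm.yaml, then brings the service up; condensed (a sketch; the staged file contents are the unit and config printed just above):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # 10-kubeadm.conf, kubelet.service and kubeadm.yaml.new are generated in memory and scp'd over
    # make the control-plane name resolvable on the node (same idempotent hosts pattern as before)
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.76.2\tcontrol-plane.minikube.internal'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
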
	I1206 09:37:38.644202  203272 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904 for IP: 192.168.76.2
	I1206 09:37:38.644267  203272 certs.go:195] generating shared ca certs ...
	I1206 09:37:38.644304  203272 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:37:38.644478  203272 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 09:37:38.644551  203272 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 09:37:38.644586  203272 certs.go:257] generating profile certs ...
	I1206 09:37:38.644714  203272 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/client.key
	I1206 09:37:38.644829  203272 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/apiserver.key.ffb1b80b
	I1206 09:37:38.644918  203272 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/proxy-client.key
	I1206 09:37:38.645069  203272 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 09:37:38.645126  203272 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 09:37:38.645149  203272 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:37:38.645223  203272 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 09:37:38.645277  203272 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:37:38.645337  203272 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 09:37:38.645414  203272 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:37:38.646118  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:37:38.694066  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:37:38.721968  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:37:38.749115  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:37:38.771905  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 09:37:38.794893  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:37:38.815907  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:37:38.837190  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:37:38.858706  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:37:38.881447  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 09:37:38.902684  203272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 09:37:38.922873  203272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:37:38.936346  203272 ssh_runner.go:195] Run: openssl version
	I1206 09:37:38.945472  203272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:37:38.954767  203272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:37:38.963411  203272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:37:38.968481  203272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:37:38.968640  203272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:37:39.011066  203272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:37:39.019059  203272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 09:37:39.026881  203272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 09:37:39.035068  203272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 09:37:39.039577  203272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 09:37:39.039681  203272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 09:37:39.082782  203272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:37:39.090433  203272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 09:37:39.098022  203272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 09:37:39.106016  203272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 09:37:39.110550  203272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 09:37:39.110665  203272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 09:37:39.154292  203272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
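
Each of the three cert installs above follows the OpenSSL hash-link convention: verify the PEM is non-empty, link it into /usr/share/ca-certificates, then make sure a <subject-hash>.0 symlink exists so verification can find it (the b5213941.0 check in the log is that hash for minikubeCA). A sketch for one cert:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    sudo test -s "$PEM"                        # non-empty?
    HASH=$(openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/$HASH.0"
    sudo test -L "/etc/ssl/certs/$HASH.0"      # hash link in place
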
	I1206 09:37:39.161868  203272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:37:39.166001  203272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:37:39.208403  203272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:37:39.250945  203272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:37:39.294170  203272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:37:39.336309  203272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:37:39.378321  203272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
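
The -checkend 86400 probes above ask whether each control-plane cert is still valid 24 hours from now (exit status 0 if so). As a loop over the same files:

    for c in apiserver-etcd-client apiserver-kubelet-client \
             etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
        # non-zero status means the cert expires within the next 86400 seconds
        openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
          || echo "$c.crt expires within 24h"
    done
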
	I1206 09:37:39.423971  203272 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-228904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-228904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:37:39.424110  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:37:39.424212  203272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:37:39.461238  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:37:39.461308  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:37:39.461327  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:37:39.461342  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:37:39.461361  203272 cri.go:89] found id: ""
	I1206 09:37:39.461434  203272 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1206 09:37:39.492966  203272 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:37:39Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1206 09:37:39.493085  203272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:37:39.511984  203272 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:37:39.512051  203272 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:37:39.512147  203272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:37:39.532540  203272 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:37:39.533023  203272 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-228904" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:37:39.533187  203272 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-228904" cluster setting kubeconfig missing "kubernetes-upgrade-228904" context setting]
	I1206 09:37:39.533521  203272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:37:39.534097  203272 kapi.go:59] client config for kubernetes-upgrade-228904: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/client.key", CAFile:"/home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:37:39.534889  203272 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 09:37:39.534978  203272 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 09:37:39.535012  203272 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 09:37:39.535039  203272 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 09:37:39.535060  203272 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 09:37:39.535366  203272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:37:39.558085  203272 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 09:36:54.416595385 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 09:37:38.397269615 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-228904"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
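
The drift above is the kubeadm v1beta3-to-v1beta4 migration: every extraArgs map becomes an ordered list of name/value pairs, the etcd proxy-refresh-interval extra arg is dropped, and kubernetesVersion moves to v1.35.0-beta.0. A sketch of the map-to-list conversion the diff reflects, with a hypothetical Arg struct standing in for kubeadm's v1beta4 argument type:

    package main

    import (
    	"fmt"
    	"sort"
    )

    // Arg is a stand-in for kubeadm's v1beta4 name/value argument entry.
    type Arg struct {
    	Name  string
    	Value string
    }

    // toArgList converts a v1beta3-style extraArgs map into the ordered
    // list form, sorted by name so regenerated YAML diffs stay stable.
    func toArgList(m map[string]string) []Arg {
    	names := make([]string, 0, len(m))
    	for n := range m {
    		names = append(names, n)
    	}
    	sort.Strings(names)
    	out := make([]Arg, 0, len(m))
    	for _, n := range names {
    		out = append(out, Arg{Name: n, Value: m[n]})
    	}
    	return out
    }

    func main() {
    	fmt.Println(toArgList(map[string]string{
    		"allocate-node-cidrs": "true",
    		"leader-elect":        "false",
    	}))
    }
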
	I1206 09:37:39.558150  203272 kubeadm.go:1161] stopping kube-system containers ...
	I1206 09:37:39.558178  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 09:37:39.558273  203272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:37:39.589593  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:37:39.589658  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:37:39.589676  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:37:39.589694  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:37:39.589713  203272 cri.go:89] found id: ""
	I1206 09:37:39.589749  203272 cri.go:252] Stopping containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:37:39.589821  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:37:39.594038  203272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599
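
Before reconfiguring, the four kube-system containers found above are stopped by id with a 10-second grace period. A sketch of issuing the same crictl call from Go (the flags are taken from the log line itself; stopContainers is a hypothetical helper):

    package main

    import (
    	"log"
    	"os/exec"
    )

    // stopContainers runs "sudo crictl stop --timeout=10 <ids...>",
    // matching the invocation in the log above.
    func stopContainers(ids []string) ([]byte, error) {
    	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
    	return exec.Command("sudo", args...).CombinedOutput()
    }

    func main() {
    	if out, err := stopContainers([]string{"<container-id>"}); err != nil {
    		log.Fatalf("crictl stop failed: %v\n%s", err, out)
    	}
    }
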
	I1206 09:37:39.625974  203272 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 09:37:39.642772  203272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:37:39.652644  203272 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec  6 09:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec  6 09:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  6 09:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec  6 09:37 /etc/kubernetes/scheduler.conf
	
	I1206 09:37:39.652767  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:37:39.663018  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:37:39.673620  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:37:39.683151  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:37:39.683267  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:37:39.692605  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:37:39.701433  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:37:39.701557  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
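
The grep-then-remove sequence above checks each kubeconfig under /etc/kubernetes for the control-plane endpoint and deletes the ones that lack it, so the kubeadm phases below regenerate them. A sketch of that pattern (ensureEndpoint is a hypothetical helper; the commands mirror the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureEndpoint keeps a kubeconfig only if it already references
    // the control-plane endpoint; otherwise it removes the file so a
    // later "kubeadm init phase kubeconfig" rewrites it.
    func ensureEndpoint(path, endpoint string) error {
    	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err == nil {
    		return nil // endpoint present, keep the file
    	}
    	fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    	return exec.Command("sudo", "rm", "-f", path).Run()
    }

    func main() {
    	_ = ensureEndpoint("/etc/kubernetes/scheduler.conf",
    		"https://control-plane.minikube.internal:8443")
    }
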
	I1206 09:37:39.709520  203272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:37:39.718595  203272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:37:39.786973  203272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:37:41.180403  203272 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.393336502s)
	I1206 09:37:41.180524  203272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:37:41.529075  203272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:37:41.677047  203272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
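
On a restart, minikube re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init, each with the version-pinned binaries prepended to PATH. A sketch of that sequence (paths and phase order taken from the log above):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	const binDir = "/var/lib/minikube/binaries/v1.35.0-beta.0"
    	const cfg = "/var/tmp/minikube/kubeadm.yaml"

    	// Same phase order as the restart path above.
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH=%s:$PATH kubeadm init phase %s --config %s`,
    			binDir, p, cfg)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			log.Fatalf("phase %q failed: %v\n%s", p, err, out)
    		}
    	}
    }
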
	I1206 09:37:41.754034  203272 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:37:41.754190  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:42.255024  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:42.754336  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:43.254984  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:43.754326  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:44.254841  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:44.755206  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:45.254428  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:45.754290  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:46.254785  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:46.755103  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:47.254991  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:47.754948  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:48.255248  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:48.754288  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:49.254602  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:49.754289  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:50.254584  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:50.755184  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:51.254977  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:51.754302  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:52.262844  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:52.754759  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:53.254417  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:53.754311  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:54.255220  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:54.754353  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:55.254812  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:55.754295  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:56.254282  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:56.754831  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:57.254908  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:57.755189  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:58.255232  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:58.754343  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:59.255019  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:37:59.755155  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:00.254327  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:00.754397  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:01.254485  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:01.755139  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:02.254753  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:02.754347  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:03.254983  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:03.754297  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:04.255149  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:04.754349  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:05.254832  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:05.754668  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:06.254728  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:06.754845  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:07.254292  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:07.754282  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:08.255162  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:08.754287  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:09.254305  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:09.754905  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:10.254393  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:10.754947  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:11.254788  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:11.754692  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:12.254863  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:12.755240  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:13.254337  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:13.755007  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:14.255108  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:14.754297  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:15.254279  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:15.754517  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:16.254438  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:16.754294  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:17.254274  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:17.754925  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:18.254314  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:18.754449  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:19.254315  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:19.755014  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:20.254343  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:20.754286  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:21.254402  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:21.754548  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:22.255289  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:22.754430  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:23.254777  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:23.754299  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:24.254309  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:24.755086  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:25.254808  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:25.754671  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:26.254604  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:26.754857  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:27.254980  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:27.754520  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:28.255214  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:28.755004  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:29.254276  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:29.755122  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:30.255117  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:30.754292  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:31.254289  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:31.754246  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:32.254901  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:32.754503  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:33.255221  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:33.754916  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:34.254294  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:34.754797  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:35.255251  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:35.754291  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:36.254272  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:36.754736  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:37.255098  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:37.754280  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:38.254295  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:38.754412  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:39.254273  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:39.754919  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:40.254798  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:40.754931  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:41.254267  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
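
The run of pgrep lines above is minikube polling for the kube-apiserver process roughly every 500ms; here it never appears within the wait window. A sketch of the same wait loop with an explicit deadline (waitForAPIServer is a hypothetical helper; the pgrep pattern is copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep every 500ms until the apiserver
    // process shows up or the deadline passes.
    func waitForAPIServer(timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return true // pgrep exits 0 when a match exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }

    func main() {
    	fmt.Println("apiserver up:", waitForAPIServer(time.Minute))
    }
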
	I1206 09:38:41.754451  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:38:41.754527  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:38:41.812526  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:41.812551  203272 cri.go:89] found id: ""
	I1206 09:38:41.812560  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:38:41.812615  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:41.824217  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:38:41.824295  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:38:41.859690  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:41.859717  203272 cri.go:89] found id: ""
	I1206 09:38:41.859725  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:38:41.859779  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:41.864559  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:38:41.864642  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:38:41.899914  203272 cri.go:89] found id: ""
	I1206 09:38:41.899936  203272 logs.go:282] 0 containers: []
	W1206 09:38:41.899945  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:38:41.899951  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:38:41.900007  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:38:41.937858  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:41.937877  203272 cri.go:89] found id: ""
	I1206 09:38:41.937885  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:38:41.937948  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:41.942833  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:38:41.942899  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:38:41.975759  203272 cri.go:89] found id: ""
	I1206 09:38:41.975783  203272 logs.go:282] 0 containers: []
	W1206 09:38:41.975792  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:38:41.975799  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:38:41.975870  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:38:42.011301  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:42.011323  203272 cri.go:89] found id: ""
	I1206 09:38:42.011331  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:38:42.011441  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:42.017048  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:38:42.017204  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:38:42.048363  203272 cri.go:89] found id: ""
	I1206 09:38:42.048447  203272 logs.go:282] 0 containers: []
	W1206 09:38:42.048470  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:38:42.048490  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:38:42.048596  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:38:42.078359  203272 cri.go:89] found id: ""
	I1206 09:38:42.078472  203272 logs.go:282] 0 containers: []
	W1206 09:38:42.078512  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:38:42.078541  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:38:42.078601  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:38:42.154890  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:38:42.155009  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:42.223074  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:38:42.223116  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:38:42.278928  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:38:42.279011  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:38:42.295780  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:38:42.295856  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:38:42.380442  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:38:42.380507  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:38:42.380536  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:42.426419  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:38:42.426491  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:42.485303  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:38:42.485379  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:42.535223  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:38:42.535430  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
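
With the apiserver still absent, minikube falls back to gathering diagnostics: kubelet and containerd journals, dmesg, per-container crictl log tails, and a describe-nodes attempt that fails with "connection refused" on localhost:8443 because nothing is serving there yet. A sketch of the per-container tail used for each control-plane container (tailContainerLogs is a hypothetical helper; the --tail value matches the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs mirrors "sudo crictl logs --tail 400 <id>".
    func tailContainerLogs(id string, lines int) (string, error) {
    	out, err := exec.Command("sudo", "crictl", "logs",
    		fmt.Sprintf("--tail=%d", lines), id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	logs, err := tailContainerLogs("<container-id>", 400)
    	if err != nil {
    		fmt.Println("log gather failed:", err)
    		return
    	}
    	fmt.Print(logs)
    }
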
	I1206 09:38:45.109748  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:45.126728  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:38:45.126905  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:38:45.173984  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:45.174008  203272 cri.go:89] found id: ""
	I1206 09:38:45.174017  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:38:45.174080  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:45.184197  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:38:45.184281  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:38:45.240144  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:45.240178  203272 cri.go:89] found id: ""
	I1206 09:38:45.240228  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:38:45.240342  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:45.249681  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:38:45.249899  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:38:45.312774  203272 cri.go:89] found id: ""
	I1206 09:38:45.312802  203272 logs.go:282] 0 containers: []
	W1206 09:38:45.312811  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:38:45.312818  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:38:45.312885  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:38:45.378096  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:45.378122  203272 cri.go:89] found id: ""
	I1206 09:38:45.378131  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:38:45.378186  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:45.384893  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:38:45.384968  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:38:45.435115  203272 cri.go:89] found id: ""
	I1206 09:38:45.435140  203272 logs.go:282] 0 containers: []
	W1206 09:38:45.435151  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:38:45.435157  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:38:45.435217  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:38:45.514708  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:45.514781  203272 cri.go:89] found id: ""
	I1206 09:38:45.514804  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:38:45.514899  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:45.524399  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:38:45.524551  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:38:45.606207  203272 cri.go:89] found id: ""
	I1206 09:38:45.606232  203272 logs.go:282] 0 containers: []
	W1206 09:38:45.606242  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:38:45.606248  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:38:45.606309  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:38:45.638205  203272 cri.go:89] found id: ""
	I1206 09:38:45.638237  203272 logs.go:282] 0 containers: []
	W1206 09:38:45.638246  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:38:45.638261  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:38:45.638276  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:45.681733  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:38:45.681807  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:45.726413  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:38:45.726501  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:45.790282  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:38:45.790315  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:38:45.865592  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:38:45.865637  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:45.908679  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:38:45.908712  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:38:45.947693  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:38:45.947726  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:38:45.981014  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:38:45.981042  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:38:45.994488  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:38:45.994518  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:38:46.089160  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:38:48.589682  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:48.604922  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:38:48.604998  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:38:48.639306  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:48.639327  203272 cri.go:89] found id: ""
	I1206 09:38:48.639336  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:38:48.639447  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:48.644274  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:38:48.644346  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:38:48.691240  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:48.691260  203272 cri.go:89] found id: ""
	I1206 09:38:48.691269  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:38:48.691320  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:48.695532  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:38:48.695613  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:38:48.735326  203272 cri.go:89] found id: ""
	I1206 09:38:48.735349  203272 logs.go:282] 0 containers: []
	W1206 09:38:48.735358  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:38:48.735365  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:38:48.735478  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:38:48.800429  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:48.800449  203272 cri.go:89] found id: ""
	I1206 09:38:48.800456  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:38:48.800510  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:48.809502  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:38:48.809584  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:38:48.849745  203272 cri.go:89] found id: ""
	I1206 09:38:48.849768  203272 logs.go:282] 0 containers: []
	W1206 09:38:48.849776  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:38:48.849783  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:38:48.849842  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:38:48.893678  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:48.893697  203272 cri.go:89] found id: ""
	I1206 09:38:48.893706  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:38:48.893759  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:48.897646  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:38:48.897711  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:38:48.947621  203272 cri.go:89] found id: ""
	I1206 09:38:48.947645  203272 logs.go:282] 0 containers: []
	W1206 09:38:48.947654  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:38:48.947660  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:38:48.947725  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:38:49.018044  203272 cri.go:89] found id: ""
	I1206 09:38:49.018065  203272 logs.go:282] 0 containers: []
	W1206 09:38:49.018074  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:38:49.018089  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:38:49.018101  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:49.083544  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:38:49.083576  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:38:49.100265  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:38:49.100291  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:49.148422  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:38:49.148500  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:38:49.189118  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:38:49.189197  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:38:49.312550  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:38:49.312630  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:38:49.392267  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:38:49.392345  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:38:49.495804  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:38:49.495825  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:38:49.495840  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:49.546611  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:38:49.546685  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:52.107308  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:52.118672  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:38:52.118746  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:38:52.161742  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:52.161765  203272 cri.go:89] found id: ""
	I1206 09:38:52.161774  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:38:52.161832  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:52.166454  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:38:52.166525  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:38:52.210532  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:52.210559  203272 cri.go:89] found id: ""
	I1206 09:38:52.210567  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:38:52.210634  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:52.215094  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:38:52.215196  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:38:52.256607  203272 cri.go:89] found id: ""
	I1206 09:38:52.256635  203272 logs.go:282] 0 containers: []
	W1206 09:38:52.256645  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:38:52.256651  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:38:52.256711  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:38:52.301477  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:52.301502  203272 cri.go:89] found id: ""
	I1206 09:38:52.301511  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:38:52.301572  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:52.307816  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:38:52.307888  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:38:52.365673  203272 cri.go:89] found id: ""
	I1206 09:38:52.365699  203272 logs.go:282] 0 containers: []
	W1206 09:38:52.365708  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:38:52.365715  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:38:52.365772  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:38:52.401281  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:52.401343  203272 cri.go:89] found id: ""
	I1206 09:38:52.401368  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:38:52.401438  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:52.406896  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:38:52.407014  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:38:52.446601  203272 cri.go:89] found id: ""
	I1206 09:38:52.446672  203272 logs.go:282] 0 containers: []
	W1206 09:38:52.446695  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:38:52.446714  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:38:52.446795  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:38:52.485695  203272 cri.go:89] found id: ""
	I1206 09:38:52.485769  203272 logs.go:282] 0 containers: []
	W1206 09:38:52.485794  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:38:52.485819  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:38:52.485843  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:52.528258  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:38:52.528289  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:52.574843  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:38:52.574876  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:52.615199  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:38:52.615229  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:38:52.649709  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:38:52.649744  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:38:52.720844  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:38:52.720889  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:52.761846  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:38:52.761878  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:38:52.791309  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:38:52.791351  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:38:52.804971  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:38:52.804997  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:38:52.870475  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:38:55.371736  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:55.383936  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:38:55.384006  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:38:55.415800  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:55.415820  203272 cri.go:89] found id: ""
	I1206 09:38:55.415828  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:38:55.415892  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:55.420674  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:38:55.420755  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:38:55.449697  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:55.449716  203272 cri.go:89] found id: ""
	I1206 09:38:55.449724  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:38:55.449779  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:55.454364  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:38:55.454479  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:38:55.481796  203272 cri.go:89] found id: ""
	I1206 09:38:55.481867  203272 logs.go:282] 0 containers: []
	W1206 09:38:55.481890  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:38:55.481909  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:38:55.481984  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:38:55.512183  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:55.512246  203272 cri.go:89] found id: ""
	I1206 09:38:55.512272  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:38:55.512339  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:55.516269  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:38:55.516352  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:38:55.546457  203272 cri.go:89] found id: ""
	I1206 09:38:55.546483  203272 logs.go:282] 0 containers: []
	W1206 09:38:55.546491  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:38:55.546498  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:38:55.546557  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:38:55.572781  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:55.572858  203272 cri.go:89] found id: ""
	I1206 09:38:55.572872  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:38:55.572942  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:55.576659  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:38:55.576726  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:38:55.601603  203272 cri.go:89] found id: ""
	I1206 09:38:55.601639  203272 logs.go:282] 0 containers: []
	W1206 09:38:55.601649  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:38:55.601674  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:38:55.601759  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:38:55.627699  203272 cri.go:89] found id: ""
	I1206 09:38:55.627765  203272 logs.go:282] 0 containers: []
	W1206 09:38:55.627792  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:38:55.627817  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:38:55.627835  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:38:55.686042  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:38:55.686077  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:55.721078  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:38:55.721113  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:55.762113  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:38:55.762148  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:55.793555  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:38:55.793586  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:38:55.821850  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:38:55.821878  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:38:55.835297  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:38:55.835327  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:38:55.898014  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:38:55.898092  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:38:55.898120  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:55.938825  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:38:55.938857  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:38:58.468599  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:38:58.478508  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:38:58.478597  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:38:58.504609  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:58.504635  203272 cri.go:89] found id: ""
	I1206 09:38:58.504644  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:38:58.504713  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:58.508441  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:38:58.508513  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:38:58.536937  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:58.536961  203272 cri.go:89] found id: ""
	I1206 09:38:58.536970  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:38:58.537022  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:58.540585  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:38:58.540694  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:38:58.566778  203272 cri.go:89] found id: ""
	I1206 09:38:58.566853  203272 logs.go:282] 0 containers: []
	W1206 09:38:58.566877  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:38:58.566896  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:38:58.566976  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:38:58.593005  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:58.593028  203272 cri.go:89] found id: ""
	I1206 09:38:58.593036  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:38:58.593108  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:58.597008  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:38:58.597111  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:38:58.635732  203272 cri.go:89] found id: ""
	I1206 09:38:58.635757  203272 logs.go:282] 0 containers: []
	W1206 09:38:58.635766  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:38:58.635772  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:38:58.635837  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:38:58.665097  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:58.665120  203272 cri.go:89] found id: ""
	I1206 09:38:58.665128  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:38:58.665212  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:38:58.669085  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:38:58.669155  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:38:58.695324  203272 cri.go:89] found id: ""
	I1206 09:38:58.695345  203272 logs.go:282] 0 containers: []
	W1206 09:38:58.695353  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:38:58.695361  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:38:58.695465  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:38:58.724746  203272 cri.go:89] found id: ""
	I1206 09:38:58.724769  203272 logs.go:282] 0 containers: []
	W1206 09:38:58.724778  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:38:58.724791  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:38:58.724802  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:38:58.755025  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:38:58.755098  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:38:58.823173  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:38:58.823197  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:38:58.823212  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:38:58.866682  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:38:58.866715  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:38:58.904295  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:38:58.904327  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:38:58.936015  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:38:58.936046  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:38:59.000057  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:38:59.000102  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:38:59.016283  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:38:59.016312  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:38:59.054322  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:38:59.054363  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
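For every container it does find, the gatherer tails the last 400 lines of its logs via the absolute-path `crictl` binary. A minimal equivalent, assuming the same `/usr/local/bin/crictl` path seen in the log (the `tailContainerLogs` helper and the truncated example ID are hypothetical):

```go
// Sketch of the "sudo /usr/local/bin/crictl logs --tail 400 <id>" step.
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n lines of a container's logs.
// CombinedOutput is used so that both the stdout and stderr streams
// replayed by crictl are captured.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "/usr/local/bin/crictl",
		"logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// Hypothetical short ID; the report always logs full 64-character IDs.
	logs, err := tailContainerLogs("9e7ab5e70ab4", 400)
	if err != nil {
		fmt.Println("log gathering failed:", err)
		return
	}
	fmt.Print(logs)
}
```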
	I1206 09:39:01.587000  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:01.597560  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:01.597680  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:01.623955  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:01.623978  203272 cri.go:89] found id: ""
	I1206 09:39:01.623997  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:01.624071  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:01.627735  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:01.627833  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:01.655120  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:01.655184  203272 cri.go:89] found id: ""
	I1206 09:39:01.655205  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:01.655273  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:01.659338  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:01.659534  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:01.687160  203272 cri.go:89] found id: ""
	I1206 09:39:01.687228  203272 logs.go:282] 0 containers: []
	W1206 09:39:01.687256  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:01.687277  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:01.687346  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:01.713646  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:01.713713  203272 cri.go:89] found id: ""
	I1206 09:39:01.713728  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:01.713785  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:01.717751  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:01.717826  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:01.743829  203272 cri.go:89] found id: ""
	I1206 09:39:01.743897  203272 logs.go:282] 0 containers: []
	W1206 09:39:01.743922  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:01.743939  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:01.744030  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:01.771110  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:01.771136  203272 cri.go:89] found id: ""
	I1206 09:39:01.771145  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:01.771202  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:01.775116  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:01.775272  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:01.800528  203272 cri.go:89] found id: ""
	I1206 09:39:01.800551  203272 logs.go:282] 0 containers: []
	W1206 09:39:01.800560  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:01.800566  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:01.800625  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:01.826205  203272 cri.go:89] found id: ""
	I1206 09:39:01.826230  203272 logs.go:282] 0 containers: []
	W1206 09:39:01.826240  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:01.826271  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:01.826287  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:01.887478  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:01.887514  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:01.901469  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:01.901497  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:01.938176  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:01.938207  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:01.995879  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:01.995913  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:02.090185  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:39:02.090213  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:02.090226  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:02.145715  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:02.145748  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:02.189743  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:02.189777  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:02.221796  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:02.221835  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
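The cycles recur on a fixed cadence: the `pgrep -xnf kube-apiserver.*minikube.*` probe fires roughly every three seconds, and each pass that fails to reach a healthy API server triggers another full log-gathering sweep. A stripped-down version of that wait loop (the function name, timeout, and interval are assumptions; only the pgrep invocation comes from the log):

```go
// Sketch of the recurring apiserver-process probe seen above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process for
// the minikube profile appears or the deadline passes. pgrep exits 0
// only when at least one process matches, so Run()'s error doubles as
// the "not found" signal.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("no kube-apiserver process within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

In this run the probe is followed every time by another gathering pass: the apiserver container ID is consistently found, yet the server never becomes healthy.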
	I1206 09:39:04.753239  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:04.763474  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:04.763607  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:04.793538  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:04.793619  203272 cri.go:89] found id: ""
	I1206 09:39:04.793650  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:04.793748  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:04.797915  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:04.798034  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:04.824685  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:04.824763  203272 cri.go:89] found id: ""
	I1206 09:39:04.824786  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:04.824863  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:04.828954  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:04.829064  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:04.855536  203272 cri.go:89] found id: ""
	I1206 09:39:04.855617  203272 logs.go:282] 0 containers: []
	W1206 09:39:04.855641  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:04.855660  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:04.855759  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:04.882714  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:04.882783  203272 cri.go:89] found id: ""
	I1206 09:39:04.882806  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:04.882886  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:04.888058  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:04.888175  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:04.926502  203272 cri.go:89] found id: ""
	I1206 09:39:04.926577  203272 logs.go:282] 0 containers: []
	W1206 09:39:04.926600  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:04.926617  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:04.926705  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:04.959294  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:04.959367  203272 cri.go:89] found id: ""
	I1206 09:39:04.959413  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:04.959495  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:04.963801  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:04.963926  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:05.040254  203272 cri.go:89] found id: ""
	I1206 09:39:05.040331  203272 logs.go:282] 0 containers: []
	W1206 09:39:05.040355  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:05.040374  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:05.040461  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:05.077510  203272 cri.go:89] found id: ""
	I1206 09:39:05.077535  203272 logs.go:282] 0 containers: []
	W1206 09:39:05.077544  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:05.077557  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:05.077570  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:05.114379  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:05.114417  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:05.146711  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:05.146738  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:05.161482  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:05.161566  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:05.226311  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:05.226345  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:05.263295  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:05.263341  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:05.297090  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:05.297163  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:05.329056  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:05.329135  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:05.392670  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:05.392707  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:05.484507  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
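Every `describe nodes` attempt fails the same way: kubectl cannot reach `localhost:8443`, meaning nothing is accepting connections on that port even though a kube-apiserver container exists. The condition can be reproduced without kubectl at all; as a hedged illustration, a bare TCP dial on the node would fail with the same "connection refused":

```go
// Sketch: probe the apiserver port directly, mirroring the failure mode
// behind "The connection to the server localhost:8443 was refused".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// On this node this would print a "connection refused" error,
		// matching the kubectl output captured in the report.
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```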
	I1206 09:39:07.985159  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:08.012044  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:08.012169  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:08.086634  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:08.086656  203272 cri.go:89] found id: ""
	I1206 09:39:08.086664  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:08.086719  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:08.094561  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:08.094632  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:08.169173  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:08.169192  203272 cri.go:89] found id: ""
	I1206 09:39:08.169199  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:08.169256  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:08.174439  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:08.174511  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:08.252199  203272 cri.go:89] found id: ""
	I1206 09:39:08.252273  203272 logs.go:282] 0 containers: []
	W1206 09:39:08.252297  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:08.252316  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:08.252401  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:08.289409  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:08.289428  203272 cri.go:89] found id: ""
	I1206 09:39:08.289436  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:08.289498  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:08.293972  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:08.294107  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:08.341178  203272 cri.go:89] found id: ""
	I1206 09:39:08.341253  203272 logs.go:282] 0 containers: []
	W1206 09:39:08.341278  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:08.341298  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:08.341380  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:08.420547  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:08.420621  203272 cri.go:89] found id: ""
	I1206 09:39:08.420644  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:08.420726  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:08.429395  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:08.429515  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:08.483865  203272 cri.go:89] found id: ""
	I1206 09:39:08.483929  203272 logs.go:282] 0 containers: []
	W1206 09:39:08.483961  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:08.483984  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:08.484071  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:08.529700  203272 cri.go:89] found id: ""
	I1206 09:39:08.529763  203272 logs.go:282] 0 containers: []
	W1206 09:39:08.529793  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:08.529817  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:08.529841  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:08.583866  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:08.587476  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:08.667148  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:08.667227  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:08.750770  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:08.750844  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:08.797672  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:08.797710  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:08.890458  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:08.890643  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:08.907466  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:08.907564  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:09.043997  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:39:09.044019  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:09.044035  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:09.101297  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:09.101344  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:11.663985  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:11.676498  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:11.676601  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:11.701851  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:11.701871  203272 cri.go:89] found id: ""
	I1206 09:39:11.701879  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:11.701935  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:11.705292  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:11.705360  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:11.730918  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:11.730942  203272 cri.go:89] found id: ""
	I1206 09:39:11.730952  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:11.731030  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:11.734556  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:11.734646  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:11.759933  203272 cri.go:89] found id: ""
	I1206 09:39:11.759959  203272 logs.go:282] 0 containers: []
	W1206 09:39:11.759969  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:11.759976  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:11.760033  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:11.785266  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:11.785288  203272 cri.go:89] found id: ""
	I1206 09:39:11.785297  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:11.785374  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:11.789047  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:11.789168  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:11.814877  203272 cri.go:89] found id: ""
	I1206 09:39:11.814916  203272 logs.go:282] 0 containers: []
	W1206 09:39:11.814926  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:11.814932  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:11.815010  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:11.844987  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:11.845009  203272 cri.go:89] found id: ""
	I1206 09:39:11.845017  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:11.845074  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:11.848750  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:11.848892  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:11.874912  203272 cri.go:89] found id: ""
	I1206 09:39:11.874939  203272 logs.go:282] 0 containers: []
	W1206 09:39:11.874948  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:11.874955  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:11.875015  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:11.900854  203272 cri.go:89] found id: ""
	I1206 09:39:11.900923  203272 logs.go:282] 0 containers: []
	W1206 09:39:11.900947  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:11.900968  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:11.900980  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:11.939458  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:11.939489  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:11.978419  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:11.978454  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:12.024739  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:12.024774  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:12.069667  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:12.069703  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:12.099821  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:12.099848  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:12.161253  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:12.161289  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:12.173914  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:12.173946  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:12.237253  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:39:12.237273  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:12.237285  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
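The "container status" step is the only one with a built-in fallback: the shell command first resolves crictl's path with `which crictl || echo crictl`, and if that listing fails it falls back to `sudo docker ps -a`. The same try-in-order shape, sketched in Go (the helper name is invented; only the two commands come from the log):

```go
// Sketch of the crictl-then-docker fallback used for "container status".
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries each runtime CLI in order and returns the first
// successful listing, mirroring the shell's || chaining above.
func containerStatus() (string, error) {
	for _, argv := range [][]string{
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "docker", "ps", "-a"},
	} {
		if out, err := exec.Command(argv[0], argv[1:]...).Output(); err == nil {
			return string(out), nil
		}
	}
	return "", fmt.Errorf("neither crictl nor docker produced a listing")
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
```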
	I1206 09:39:14.767523  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:14.777806  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:14.777878  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:14.804032  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:14.804053  203272 cri.go:89] found id: ""
	I1206 09:39:14.804062  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:14.804119  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:14.807797  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:14.807876  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:14.832707  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:14.832729  203272 cri.go:89] found id: ""
	I1206 09:39:14.832737  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:14.832792  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:14.836574  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:14.836648  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:14.861322  203272 cri.go:89] found id: ""
	I1206 09:39:14.861344  203272 logs.go:282] 0 containers: []
	W1206 09:39:14.861353  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:14.861359  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:14.861440  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:14.885750  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:14.885778  203272 cri.go:89] found id: ""
	I1206 09:39:14.885787  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:14.885840  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:14.889434  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:14.889505  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:14.914896  203272 cri.go:89] found id: ""
	I1206 09:39:14.914919  203272 logs.go:282] 0 containers: []
	W1206 09:39:14.914929  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:14.914935  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:14.914991  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:14.941148  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:14.941169  203272 cri.go:89] found id: ""
	I1206 09:39:14.941178  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:14.941231  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:14.944797  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:14.944869  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:14.984350  203272 cri.go:89] found id: ""
	I1206 09:39:14.984377  203272 logs.go:282] 0 containers: []
	W1206 09:39:14.984386  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:14.984393  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:14.984462  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:15.052668  203272 cri.go:89] found id: ""
	I1206 09:39:15.052693  203272 logs.go:282] 0 containers: []
	W1206 09:39:15.052709  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:15.052747  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:15.052767  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:15.071020  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:15.071100  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:15.114284  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:15.114317  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:15.147611  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:15.147641  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:15.187817  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:15.187850  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:15.221759  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:15.221790  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:15.250542  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:15.250578  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:15.280185  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:15.280214  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:15.339044  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:15.339076  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:15.402710  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
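Alongside container logs, each pass also collects host-level evidence: the last 400 journal lines for the kubelet and containerd units, plus kernel messages at warning level or worse via dmesg. A compact sketch of the journal half, assuming systemd and passwordless sudo (`unitLogs` is an invented helper; the flags are copied from the log lines):

```go
// Sketch of the "sudo journalctl -u <unit> -n 400" collection step.
package main

import (
	"fmt"
	"os/exec"
)

// unitLogs returns the last n lines of a systemd unit's journal.
func unitLogs(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(unit, 400)
		if err != nil {
			fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("collected %d bytes from %s\n", len(logs), unit)
	}
}
```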
	I1206 09:39:17.902906  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:17.913205  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:17.913282  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:17.937623  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:17.937644  203272 cri.go:89] found id: ""
	I1206 09:39:17.937653  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:17.937717  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:17.941233  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:17.941301  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:17.966747  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:17.966769  203272 cri.go:89] found id: ""
	I1206 09:39:17.966778  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:17.966833  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:17.970942  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:17.971011  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:18.012400  203272 cri.go:89] found id: ""
	I1206 09:39:18.012428  203272 logs.go:282] 0 containers: []
	W1206 09:39:18.012437  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:18.012444  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:18.012506  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:18.048780  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:18.048802  203272 cri.go:89] found id: ""
	I1206 09:39:18.048811  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:18.048867  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:18.052746  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:18.052829  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:18.082078  203272 cri.go:89] found id: ""
	I1206 09:39:18.082100  203272 logs.go:282] 0 containers: []
	W1206 09:39:18.082109  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:18.082116  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:18.082175  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:18.107061  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:18.107127  203272 cri.go:89] found id: ""
	I1206 09:39:18.107149  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:18.107242  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:18.111023  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:18.111106  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:18.135996  203272 cri.go:89] found id: ""
	I1206 09:39:18.136017  203272 logs.go:282] 0 containers: []
	W1206 09:39:18.136026  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:18.136033  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:18.136094  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:18.165274  203272 cri.go:89] found id: ""
	I1206 09:39:18.165299  203272 logs.go:282] 0 containers: []
	W1206 09:39:18.165308  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:18.165321  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:18.165351  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:18.226647  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:39:18.226689  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:18.226702  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:18.268721  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:18.268754  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:18.302616  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:18.302648  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:18.362464  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:18.362500  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:18.375795  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:18.375826  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:18.412603  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:18.412636  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:18.443238  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:18.443268  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:18.473148  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:18.473181  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:21.004248  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:21.020514  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:21.020607  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:21.046730  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:21.046753  203272 cri.go:89] found id: ""
	I1206 09:39:21.046766  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:21.046840  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:21.050553  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:21.050625  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:21.074780  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:21.074804  203272 cri.go:89] found id: ""
	I1206 09:39:21.074813  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:21.074864  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:21.078590  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:21.078669  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:21.109560  203272 cri.go:89] found id: ""
	I1206 09:39:21.109584  203272 logs.go:282] 0 containers: []
	W1206 09:39:21.109592  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:21.109603  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:21.109663  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:21.135005  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:21.135075  203272 cri.go:89] found id: ""
	I1206 09:39:21.135103  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:21.135171  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:21.138975  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:21.139050  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:21.163313  203272 cri.go:89] found id: ""
	I1206 09:39:21.163338  203272 logs.go:282] 0 containers: []
	W1206 09:39:21.163347  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:21.163353  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:21.163473  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:21.189139  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:21.189172  203272 cri.go:89] found id: ""
	I1206 09:39:21.189181  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:21.189236  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:21.192806  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:21.192878  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:21.218125  203272 cri.go:89] found id: ""
	I1206 09:39:21.218149  203272 logs.go:282] 0 containers: []
	W1206 09:39:21.218157  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:21.218164  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:21.218226  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:21.247349  203272 cri.go:89] found id: ""
	I1206 09:39:21.247371  203272 logs.go:282] 0 containers: []
	W1206 09:39:21.247414  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:21.247431  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:21.247446  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:21.307639  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:39:21.307659  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:21.307672  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:21.342754  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:21.342784  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:21.385237  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:21.385315  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:21.419956  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:21.419991  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:21.454434  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:21.454463  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:21.484040  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:21.484065  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:21.543941  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:21.543973  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:21.574103  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:21.574133  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
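
The block above is one iteration of the apiserver health-check loop: probe for a kube-apiserver process with pgrep, enumerate the control-plane containers with crictl, and, while the probe keeps failing, gather diagnostics from every known log source before retrying roughly every three seconds. A minimal Go sketch of that poll-until-deadline pattern, assuming sudo and pgrep are available on the node (waitForAPIServer is a hypothetical helper, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer re-runs the same probe the log shows: pgrep exits 0
    // as soon as one process matches the pattern, non-zero otherwise.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return nil // a kube-apiserver process is running
    		}
    		time.Sleep(3 * time.Second) // the log shows ~3s between probes
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
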
	I1206 09:39:24.087580  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:24.098247  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:24.098323  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:24.123821  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:24.123843  203272 cri.go:89] found id: ""
	I1206 09:39:24.123851  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:24.123904  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:24.127846  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:24.127923  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:24.157660  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:24.157683  203272 cri.go:89] found id: ""
	I1206 09:39:24.157690  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:24.157752  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:24.161352  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:24.161427  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:24.185533  203272 cri.go:89] found id: ""
	I1206 09:39:24.185597  203272 logs.go:282] 0 containers: []
	W1206 09:39:24.185610  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:24.185617  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:24.185678  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:24.210795  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:24.210818  203272 cri.go:89] found id: ""
	I1206 09:39:24.210827  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:24.210885  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:24.214485  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:24.214556  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:24.238319  203272 cri.go:89] found id: ""
	I1206 09:39:24.238344  203272 logs.go:282] 0 containers: []
	W1206 09:39:24.238353  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:24.238359  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:24.238434  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:24.268095  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:24.268118  203272 cri.go:89] found id: ""
	I1206 09:39:24.268127  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:24.268193  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:24.271937  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:24.272016  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:24.296704  203272 cri.go:89] found id: ""
	I1206 09:39:24.296728  203272 logs.go:282] 0 containers: []
	W1206 09:39:24.296737  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:24.296744  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:24.296802  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:24.326349  203272 cri.go:89] found id: ""
	I1206 09:39:24.326385  203272 logs.go:282] 0 containers: []
	W1206 09:39:24.326395  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:24.326409  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:24.326421  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:24.369806  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:24.369836  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:24.398919  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:24.398956  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:24.445745  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:24.445772  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:24.504036  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:24.504069  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:24.570972  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:39:24.570996  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:24.571010  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:24.605029  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:24.605061  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:24.618011  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:24.618046  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:24.653880  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:24.653911  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
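
Each retry fails the same way: kubectl inside the node dials localhost:8443 and gets connection refused, meaning nothing is bound to the port even though a kube-apiserver container ID was found, i.e. the container exists but its process is not serving. A quick way to distinguish that state from a network drop is a raw TCP probe, since a refusal returns immediately while a filtered port times out. A minimal sketch, with the endpoint taken from the error above:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the endpoint kubectl uses inside the node (from the error above).
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// "connection refused" means nothing is bound; a timeout would
    		// suggest the port is filtered rather than closed.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
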
	I1206 09:39:27.191481  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:27.201935  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:27.202005  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:27.227154  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:27.227225  203272 cri.go:89] found id: ""
	I1206 09:39:27.227263  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:27.227353  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:27.231107  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:27.231175  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:27.256521  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:27.256543  203272 cri.go:89] found id: ""
	I1206 09:39:27.256551  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:27.256602  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:27.260212  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:27.260281  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:27.285155  203272 cri.go:89] found id: ""
	I1206 09:39:27.285187  203272 logs.go:282] 0 containers: []
	W1206 09:39:27.285196  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:27.285203  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:27.285260  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:27.309968  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:27.309990  203272 cri.go:89] found id: ""
	I1206 09:39:27.309998  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:27.310053  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:27.313626  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:27.313700  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:27.342839  203272 cri.go:89] found id: ""
	I1206 09:39:27.342863  203272 logs.go:282] 0 containers: []
	W1206 09:39:27.342872  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:27.342893  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:27.342963  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:27.368931  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:27.368953  203272 cri.go:89] found id: ""
	I1206 09:39:27.368961  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:27.369039  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:27.372707  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:27.372826  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:27.397043  203272 cri.go:89] found id: ""
	I1206 09:39:27.397066  203272 logs.go:282] 0 containers: []
	W1206 09:39:27.397075  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:27.397081  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:27.397146  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:27.424402  203272 cri.go:89] found id: ""
	I1206 09:39:27.424445  203272 logs.go:282] 0 containers: []
	W1206 09:39:27.424454  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:27.424470  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:27.424482  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:27.460624  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:27.460660  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:27.493425  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:27.493454  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:27.553315  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:27.553352  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:27.598087  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:27.598118  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:27.628504  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:27.628538  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:27.657629  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:27.657660  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:27.671010  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:27.671036  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:27.738937  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:39:27.738960  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:27.738973  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:30.276245  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:30.287833  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:30.287928  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:30.315639  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:30.315664  203272 cri.go:89] found id: ""
	I1206 09:39:30.315674  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:30.315735  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:30.319809  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:30.319883  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:30.347206  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:30.347230  203272 cri.go:89] found id: ""
	I1206 09:39:30.347239  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:30.347301  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:30.351650  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:30.351725  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:30.377348  203272 cri.go:89] found id: ""
	I1206 09:39:30.377373  203272 logs.go:282] 0 containers: []
	W1206 09:39:30.377383  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:30.377389  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:30.377469  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:30.404437  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:30.404460  203272 cri.go:89] found id: ""
	I1206 09:39:30.404469  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:30.404526  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:30.408208  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:30.408284  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:30.437236  203272 cri.go:89] found id: ""
	I1206 09:39:30.437266  203272 logs.go:282] 0 containers: []
	W1206 09:39:30.437276  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:30.437282  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:30.437362  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:30.466381  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:30.466413  203272 cri.go:89] found id: ""
	I1206 09:39:30.466422  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:30.466498  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:30.472645  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:30.472744  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:30.499897  203272 cri.go:89] found id: ""
	I1206 09:39:30.499921  203272 logs.go:282] 0 containers: []
	W1206 09:39:30.499930  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:30.499937  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:30.500027  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:30.525509  203272 cri.go:89] found id: ""
	I1206 09:39:30.525536  203272 logs.go:282] 0 containers: []
	W1206 09:39:30.525545  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:30.525575  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:30.525597  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:30.556062  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:30.556091  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:30.613663  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:30.613701  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:30.627259  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:30.627293  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:30.692980  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:39:30.693002  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:30.693015  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:30.727322  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:30.727353  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:30.780703  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:30.780744  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:30.833878  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:30.833909  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:30.867621  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:30.867658  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
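
The container enumeration in each pass relies on crictl's --quiet flag, which prints one container ID per line and nothing else, so empty output is exactly what produces the found id: "" / 0 containers lines above. A hedged sketch of the same probe-and-parse step (listContainerIDs is a hypothetical helper, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs wraps the probe from the log: with --quiet, crictl
    // prints one container ID per line, so no output means no match.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-proxy"} {
    		ids, err := listContainerIDs(name)
    		if err != nil {
    			fmt.Printf("%s: error: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }
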
	I1206 09:39:33.414560  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:33.424885  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:33.424952  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:33.450330  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:33.450350  203272 cri.go:89] found id: ""
	I1206 09:39:33.450360  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:33.450424  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:33.454118  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:33.454189  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:33.480168  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:33.480190  203272 cri.go:89] found id: ""
	I1206 09:39:33.480199  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:33.480255  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:33.484255  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:33.484326  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:33.509067  203272 cri.go:89] found id: ""
	I1206 09:39:33.509092  203272 logs.go:282] 0 containers: []
	W1206 09:39:33.509101  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:33.509109  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:33.509188  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:33.534151  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:33.534171  203272 cri.go:89] found id: ""
	I1206 09:39:33.534180  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:33.534235  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:33.538018  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:33.538088  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:33.563544  203272 cri.go:89] found id: ""
	I1206 09:39:33.563569  203272 logs.go:282] 0 containers: []
	W1206 09:39:33.563579  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:33.563585  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:33.563645  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:33.593898  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:33.593922  203272 cri.go:89] found id: ""
	I1206 09:39:33.593931  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:33.594011  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:33.597847  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:33.597941  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:33.629351  203272 cri.go:89] found id: ""
	I1206 09:39:33.629376  203272 logs.go:282] 0 containers: []
	W1206 09:39:33.629386  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:33.629393  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:33.629481  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:33.656816  203272 cri.go:89] found id: ""
	I1206 09:39:33.656843  203272 logs.go:282] 0 containers: []
	W1206 09:39:33.656853  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:33.656873  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:33.656889  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:33.728533  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:39:33.728608  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:33.728636  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:33.763973  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:33.764048  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:33.810881  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:33.810916  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:33.841610  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:33.841643  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:33.871207  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:33.871232  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:33.931047  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:33.931081  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:33.966264  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:33.966294  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:33.998316  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:33.998349  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:36.514981  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:36.526798  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:36.526875  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:36.552985  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:36.553004  203272 cri.go:89] found id: ""
	I1206 09:39:36.553012  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:36.553066  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:36.556642  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:36.556711  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:36.582825  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:36.582848  203272 cri.go:89] found id: ""
	I1206 09:39:36.582856  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:36.582911  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:36.586548  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:36.586622  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:36.613346  203272 cri.go:89] found id: ""
	I1206 09:39:36.613378  203272 logs.go:282] 0 containers: []
	W1206 09:39:36.613387  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:36.613393  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:36.613448  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:36.638140  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:36.638163  203272 cri.go:89] found id: ""
	I1206 09:39:36.638171  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:36.638225  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:36.641916  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:36.641984  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:36.670816  203272 cri.go:89] found id: ""
	I1206 09:39:36.670849  203272 logs.go:282] 0 containers: []
	W1206 09:39:36.670858  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:36.670865  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:36.670945  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:36.698414  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:36.698435  203272 cri.go:89] found id: ""
	I1206 09:39:36.698443  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:36.698500  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:36.703263  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:36.703346  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:36.738519  203272 cri.go:89] found id: ""
	I1206 09:39:36.738546  203272 logs.go:282] 0 containers: []
	W1206 09:39:36.738555  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:36.738561  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:36.738623  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:36.765612  203272 cri.go:89] found id: ""
	I1206 09:39:36.765642  203272 logs.go:282] 0 containers: []
	W1206 09:39:36.765655  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:36.765668  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:36.765680  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:36.835835  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:36.835881  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:36.849078  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:36.849105  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:36.917773  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:39:36.917799  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:36.917816  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:36.952221  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:36.952253  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:36.981123  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:36.981159  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:37.014285  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:37.014317  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:37.066174  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:37.066205  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:37.101767  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:37.101810  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
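
Every gather pass pulls the same fixed set of sources, each capped at 400 lines: journalctl for the kubelet and containerd units, crictl logs --tail 400 for each discovered container, and dmesg restricted to warn-and-above severities (in util-linux dmesg, -H is human-readable output, -P disables the pager, -L=never disables color). A sketch of that fan-out, assuming the commands run inside the node (the logSources table is illustrative, not minikube's):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // logSources is illustrative: the same commands the gather pass runs,
    // each capped at 400 lines. Per-container sources would be appended
    // from the IDs that crictl discovered.
    var logSources = map[string]string{
    	"kubelet":    "sudo journalctl -u kubelet -n 400",
    	"containerd": "sudo journalctl -u containerd -n 400",
    	"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    }

    func main() {
    	for name, cmd := range logSources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("== %s ==\n", name)
    		if err != nil {
    			fmt.Printf("(gather failed: %v)\n", err)
    		}
    		fmt.Print(string(out))
    	}
    }
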
	I1206 09:39:39.638701  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:39.660276  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:39.660347  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:39.687265  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:39.687288  203272 cri.go:89] found id: ""
	I1206 09:39:39.687296  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:39.687352  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:39.690971  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:39.691047  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:39.717159  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:39.717183  203272 cri.go:89] found id: ""
	I1206 09:39:39.717192  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:39.717246  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:39.723929  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:39.724002  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:39.755646  203272 cri.go:89] found id: ""
	I1206 09:39:39.755670  203272 logs.go:282] 0 containers: []
	W1206 09:39:39.755679  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:39.755686  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:39.755741  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:39.797021  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:39.797045  203272 cri.go:89] found id: ""
	I1206 09:39:39.797053  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:39.797107  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:39.800636  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:39.800706  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:39.830026  203272 cri.go:89] found id: ""
	I1206 09:39:39.830052  203272 logs.go:282] 0 containers: []
	W1206 09:39:39.830061  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:39.830067  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:39.830132  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:39.861707  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:39.861786  203272 cri.go:89] found id: ""
	I1206 09:39:39.861817  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:39.861899  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:39.865716  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:39.865813  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:39.892150  203272 cri.go:89] found id: ""
	I1206 09:39:39.892176  203272 logs.go:282] 0 containers: []
	W1206 09:39:39.892185  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:39.892191  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:39.892249  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:39.917658  203272 cri.go:89] found id: ""
	I1206 09:39:39.917682  203272 logs.go:282] 0 containers: []
	W1206 09:39:39.917692  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:39.917705  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:39.917717  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:39.946474  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:39.946508  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:39.985006  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:39.985033  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:40.068679  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:40.068716  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:40.082904  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:40.082935  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:40.118126  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:40.118161  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:40.155503  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:40.155536  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:40.189266  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:40.189300  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:40.260240  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:39:40.260262  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:40.260276  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:42.797113  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:42.807632  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:42.807725  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:42.833491  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:42.833512  203272 cri.go:89] found id: ""
	I1206 09:39:42.833521  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:42.833582  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:42.837559  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:42.837634  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:42.869726  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:42.869751  203272 cri.go:89] found id: ""
	I1206 09:39:42.869759  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:42.869849  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:42.873646  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:42.873736  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:42.899698  203272 cri.go:89] found id: ""
	I1206 09:39:42.899764  203272 logs.go:282] 0 containers: []
	W1206 09:39:42.899779  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:42.899786  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:42.899844  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:42.925039  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:42.925062  203272 cri.go:89] found id: ""
	I1206 09:39:42.925070  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:42.925146  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:42.928745  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:42.928820  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:42.952923  203272 cri.go:89] found id: ""
	I1206 09:39:42.952948  203272 logs.go:282] 0 containers: []
	W1206 09:39:42.952957  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:42.952964  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:42.953080  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:42.978574  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:42.978596  203272 cri.go:89] found id: ""
	I1206 09:39:42.978604  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:42.978674  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:42.982436  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:42.982522  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:43.011043  203272 cri.go:89] found id: ""
	I1206 09:39:43.011069  203272 logs.go:282] 0 containers: []
	W1206 09:39:43.011078  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:43.011084  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:43.011166  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:43.038169  203272 cri.go:89] found id: ""
	I1206 09:39:43.038194  203272 logs.go:282] 0 containers: []
	W1206 09:39:43.038204  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:43.038217  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:43.038260  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:43.109282  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:39:43.109302  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:43.109320  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:43.144842  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:43.144878  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:43.206330  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:43.206366  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:43.219428  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:43.219464  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:43.257280  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:43.257309  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:43.297454  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:43.297493  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:43.329292  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:43.329325  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:43.358391  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:43.358422  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:45.889444  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:45.901070  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:45.901139  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:45.927788  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:45.927811  203272 cri.go:89] found id: ""
	I1206 09:39:45.927820  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:45.927904  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:45.932347  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:45.932418  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:45.963483  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:45.963516  203272 cri.go:89] found id: ""
	I1206 09:39:45.963525  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:45.963620  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:45.967694  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:45.967773  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:45.994599  203272 cri.go:89] found id: ""
	I1206 09:39:45.994628  203272 logs.go:282] 0 containers: []
	W1206 09:39:45.994637  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:45.994643  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:45.994702  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:46.026814  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:46.026851  203272 cri.go:89] found id: ""
	I1206 09:39:46.026860  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:46.026932  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:46.031047  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:46.031125  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:46.057574  203272 cri.go:89] found id: ""
	I1206 09:39:46.057600  203272 logs.go:282] 0 containers: []
	W1206 09:39:46.057609  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:46.057616  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:46.057676  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:46.083481  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:46.083554  203272 cri.go:89] found id: ""
	I1206 09:39:46.083569  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:46.083625  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:46.087809  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:46.087880  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:46.112959  203272 cri.go:89] found id: ""
	I1206 09:39:46.112981  203272 logs.go:282] 0 containers: []
	W1206 09:39:46.112990  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:46.112996  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:46.113061  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:46.138885  203272 cri.go:89] found id: ""
	I1206 09:39:46.138910  203272 logs.go:282] 0 containers: []
	W1206 09:39:46.138919  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:46.138931  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:46.138948  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:46.168589  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:46.168618  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:46.227777  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:46.227813  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:46.264906  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:46.264940  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:46.310546  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:46.310580  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:46.324341  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:46.324370  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:46.393335  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:39:46.393360  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:46.393374  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:46.425813  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:46.425850  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:46.462346  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:46.462379  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
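
The cycle above shows the enumeration pattern this log repeats for every control-plane component: `sudo crictl ps -a --quiet --name=<component>` prints one container ID per line, so empty output is what produces the `0 containers` and `No container was found matching` lines. A minimal Go sketch of that step, assuming only that `crictl` is on PATH and sudo is non-interactive (an illustration of the command's contract, not minikube's actual cri.go code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (running or exited)
// whose name matches the given component, using crictl's --quiet output,
// which is one container ID per line.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %q: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```
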
	I1206 09:39:48.995417  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:49.007009  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:49.007095  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:49.033080  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:49.033098  203272 cri.go:89] found id: ""
	I1206 09:39:49.033106  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:49.033172  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:49.037440  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:49.037518  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:49.067786  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:49.067807  203272 cri.go:89] found id: ""
	I1206 09:39:49.067815  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:49.067875  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:49.071709  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:49.071818  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:49.097662  203272 cri.go:89] found id: ""
	I1206 09:39:49.097687  203272 logs.go:282] 0 containers: []
	W1206 09:39:49.097696  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:49.097703  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:49.097762  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:49.126215  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:49.126238  203272 cri.go:89] found id: ""
	I1206 09:39:49.126247  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:49.126304  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:49.129826  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:49.129923  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:49.155402  203272 cri.go:89] found id: ""
	I1206 09:39:49.155427  203272 logs.go:282] 0 containers: []
	W1206 09:39:49.155436  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:49.155444  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:49.155508  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:49.179751  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:49.179774  203272 cri.go:89] found id: ""
	I1206 09:39:49.179782  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:49.179836  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:49.183452  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:49.183523  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:49.209797  203272 cri.go:89] found id: ""
	I1206 09:39:49.209875  203272 logs.go:282] 0 containers: []
	W1206 09:39:49.209900  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:49.209921  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:49.210012  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:49.237054  203272 cri.go:89] found id: ""
	I1206 09:39:49.237077  203272 logs.go:282] 0 containers: []
	W1206 09:39:49.237088  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:49.237101  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:49.237112  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:49.265746  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:49.265778  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:49.294531  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:49.294561  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:49.352593  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:49.352630  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:49.366009  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:49.366079  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:49.399778  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:49.399813  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:49.442357  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:49.442392  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:49.480951  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:49.480981  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:49.556064  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:39:49.556084  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:49.556097  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
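
Between passes the runner re-checks whether an apiserver process exists (`sudo pgrep -xnf kube-apiserver.*minikube.*`) and then repeats the whole gather cycle roughly every three seconds. A hedged sketch of that poll-until-deadline shape in Go; the three-second interval matches the timestamps above, but the deadline value is an assumption for illustration and is not read from this log:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether pgrep finds a kube-apiserver process
// whose full command line mentions minikube; pgrep exits non-zero when
// nothing matches, which we treat as "not running".
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed timeout, for illustration only
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process is up")
			return
		}
		// In the log above, a full log-gathering pass happens here
		// before the next check.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```
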
	I1206 09:39:52.090536  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:52.105308  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:52.105400  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:52.134311  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:52.134331  203272 cri.go:89] found id: ""
	I1206 09:39:52.134339  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:52.134403  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:52.138378  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:52.138503  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:52.170092  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:52.170111  203272 cri.go:89] found id: ""
	I1206 09:39:52.170126  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:52.170189  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:52.174007  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:52.174111  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:52.200718  203272 cri.go:89] found id: ""
	I1206 09:39:52.200740  203272 logs.go:282] 0 containers: []
	W1206 09:39:52.200749  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:52.200756  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:52.200814  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:52.230476  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:52.230558  203272 cri.go:89] found id: ""
	I1206 09:39:52.230580  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:52.230677  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:52.234614  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:52.234732  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:52.261418  203272 cri.go:89] found id: ""
	I1206 09:39:52.261442  203272 logs.go:282] 0 containers: []
	W1206 09:39:52.261451  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:52.261457  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:52.261514  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:52.286382  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:52.286406  203272 cri.go:89] found id: ""
	I1206 09:39:52.286415  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:52.286472  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:52.291106  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:52.291230  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:52.316032  203272 cri.go:89] found id: ""
	I1206 09:39:52.316057  203272 logs.go:282] 0 containers: []
	W1206 09:39:52.316077  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:52.316101  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:52.316175  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:52.345511  203272 cri.go:89] found id: ""
	I1206 09:39:52.345536  203272 logs.go:282] 0 containers: []
	W1206 09:39:52.345545  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:52.345558  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:52.345570  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:52.430087  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:39:52.430111  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:52.430124  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:52.474560  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:52.474593  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:52.569527  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:52.569556  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:52.649040  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:52.649077  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:52.668504  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:52.668532  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:52.716809  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:52.716845  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:52.750664  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:52.750748  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:52.785106  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:52.785142  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:55.317646  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:55.327930  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:55.328005  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:55.357845  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:55.357875  203272 cri.go:89] found id: ""
	I1206 09:39:55.357883  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:55.357939  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:55.361756  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:55.361830  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:55.387464  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:55.387484  203272 cri.go:89] found id: ""
	I1206 09:39:55.387493  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:55.387548  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:55.391124  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:55.391250  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:55.416996  203272 cri.go:89] found id: ""
	I1206 09:39:55.417017  203272 logs.go:282] 0 containers: []
	W1206 09:39:55.417026  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:55.417033  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:55.417092  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:55.443058  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:55.443080  203272 cri.go:89] found id: ""
	I1206 09:39:55.443089  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:55.443146  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:55.446921  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:55.447027  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:55.485685  203272 cri.go:89] found id: ""
	I1206 09:39:55.485708  203272 logs.go:282] 0 containers: []
	W1206 09:39:55.485717  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:55.485724  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:55.485780  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:55.521573  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:55.521598  203272 cri.go:89] found id: ""
	I1206 09:39:55.521607  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:55.521664  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:55.526112  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:55.526193  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:55.555625  203272 cri.go:89] found id: ""
	I1206 09:39:55.555649  203272 logs.go:282] 0 containers: []
	W1206 09:39:55.555657  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:55.555663  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:55.555727  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:55.581939  203272 cri.go:89] found id: ""
	I1206 09:39:55.581971  203272 logs.go:282] 0 containers: []
	W1206 09:39:55.581981  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:55.581997  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:55.582010  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:55.620749  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:55.620781  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:55.649688  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:55.649722  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:55.678378  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:55.678407  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:55.692079  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:55.692106  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:55.725201  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:55.725231  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:55.760874  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:55.760908  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:55.792943  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:55.792976  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:55.853390  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:55.853425  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:55.922750  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
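
Every `describe nodes` attempt above fails identically: kubectl reports `The connection to the server localhost:8443 was refused`, meaning a kube-apiserver container ID exists in containerd but nothing is accepting TCP connections on the apiserver port. A direct way to confirm that symptom is a plain port probe; the sketch below is a generic dial with the address copied from the error text, not a minikube helper:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig used in the log above points at localhost:8443.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the kubectl error above.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```
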
	I1206 09:39:58.422980  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:39:58.433222  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:39:58.433299  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:39:58.458180  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:58.458203  203272 cri.go:89] found id: ""
	I1206 09:39:58.458211  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:39:58.458268  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:58.461854  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:39:58.461932  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:39:58.496599  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:58.496626  203272 cri.go:89] found id: ""
	I1206 09:39:58.496635  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:39:58.496698  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:58.501405  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:39:58.501480  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:39:58.538270  203272 cri.go:89] found id: ""
	I1206 09:39:58.538348  203272 logs.go:282] 0 containers: []
	W1206 09:39:58.538372  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:39:58.538394  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:39:58.538468  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:39:58.566202  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:58.566266  203272 cri.go:89] found id: ""
	I1206 09:39:58.566288  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:39:58.566362  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:58.570253  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:39:58.570392  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:39:58.598415  203272 cri.go:89] found id: ""
	I1206 09:39:58.598481  203272 logs.go:282] 0 containers: []
	W1206 09:39:58.598504  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:39:58.598523  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:39:58.598596  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:39:58.625855  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:39:58.625935  203272 cri.go:89] found id: ""
	I1206 09:39:58.625958  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:39:58.626031  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:39:58.629930  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:39:58.630046  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:39:58.658587  203272 cri.go:89] found id: ""
	I1206 09:39:58.658613  203272 logs.go:282] 0 containers: []
	W1206 09:39:58.658622  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:39:58.658628  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:39:58.658736  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:39:58.684130  203272 cri.go:89] found id: ""
	I1206 09:39:58.684154  203272 logs.go:282] 0 containers: []
	W1206 09:39:58.684163  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:39:58.684194  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:39:58.684212  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:39:58.696986  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:39:58.697012  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:39:58.762760  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:39:58.762780  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:39:58.762798  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:39:58.796556  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:39:58.796587  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:39:58.826294  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:39:58.826329  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:39:58.872519  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:39:58.872547  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:39:58.935080  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:39:58.935116  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:39:58.969625  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:39:58.969657  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:39:59.004454  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:39:59.004493  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:01.543807  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:01.555109  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:01.555192  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:01.583605  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:01.583675  203272 cri.go:89] found id: ""
	I1206 09:40:01.583696  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:01.583786  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:01.587906  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:01.588001  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:01.616351  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:01.616376  203272 cri.go:89] found id: ""
	I1206 09:40:01.616385  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:01.616455  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:01.620440  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:01.620519  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:01.647627  203272 cri.go:89] found id: ""
	I1206 09:40:01.647657  203272 logs.go:282] 0 containers: []
	W1206 09:40:01.647666  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:01.647673  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:01.647730  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:01.674937  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:01.674962  203272 cri.go:89] found id: ""
	I1206 09:40:01.674972  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:01.675030  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:01.679031  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:01.679152  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:01.704672  203272 cri.go:89] found id: ""
	I1206 09:40:01.704696  203272 logs.go:282] 0 containers: []
	W1206 09:40:01.704706  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:01.704712  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:01.704794  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:01.735680  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:01.735753  203272 cri.go:89] found id: ""
	I1206 09:40:01.735768  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:01.735827  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:01.739752  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:01.739861  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:01.765172  203272 cri.go:89] found id: ""
	I1206 09:40:01.765198  203272 logs.go:282] 0 containers: []
	W1206 09:40:01.765208  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:01.765215  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:01.765294  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:01.797154  203272 cri.go:89] found id: ""
	I1206 09:40:01.797354  203272 logs.go:282] 0 containers: []
	W1206 09:40:01.797386  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:01.797407  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:01.797455  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:01.811313  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:01.811344  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:01.883633  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:01.883718  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:01.921137  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:01.921172  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:01.955693  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:01.955742  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:01.985243  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:01.985282  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:02.028155  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:02.028189  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:02.087161  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:02.087198  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:02.158974  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:02.158994  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:02.159006  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:04.694876  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:04.705767  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:04.705872  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:04.732126  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:04.732150  203272 cri.go:89] found id: ""
	I1206 09:40:04.732158  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:04.732260  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:04.736058  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:04.736131  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:04.767642  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:04.767716  203272 cri.go:89] found id: ""
	I1206 09:40:04.767740  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:04.767830  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:04.772307  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:04.772445  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:04.802708  203272 cri.go:89] found id: ""
	I1206 09:40:04.802735  203272 logs.go:282] 0 containers: []
	W1206 09:40:04.802746  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:04.802755  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:04.802821  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:04.828397  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:04.828422  203272 cri.go:89] found id: ""
	I1206 09:40:04.828430  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:04.828490  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:04.832164  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:04.832288  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:04.864363  203272 cri.go:89] found id: ""
	I1206 09:40:04.864403  203272 logs.go:282] 0 containers: []
	W1206 09:40:04.864412  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:04.864419  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:04.864500  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:04.891516  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:04.891540  203272 cri.go:89] found id: ""
	I1206 09:40:04.891548  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:04.891636  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:04.895780  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:04.895875  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:04.922104  203272 cri.go:89] found id: ""
	I1206 09:40:04.922128  203272 logs.go:282] 0 containers: []
	W1206 09:40:04.922137  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:04.922144  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:04.922221  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:04.947456  203272 cri.go:89] found id: ""
	I1206 09:40:04.947486  203272 logs.go:282] 0 containers: []
	W1206 09:40:04.947495  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:04.947527  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:04.947547  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:05.026251  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:05.026320  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:05.026347  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:05.063573  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:05.063605  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:05.101887  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:05.101928  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:05.145577  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:05.145609  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:05.205052  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:05.205089  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:05.219319  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:05.219426  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:05.273843  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:05.274404  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:05.321811  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:05.321843  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
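
Each gather pass tails every source to its last 400 lines: `journalctl -u <unit> -n 400` for the kubelet and containerd units, `crictl logs --tail 400 <id>` per control-plane container, and a `which crictl || echo crictl` fallback to `docker ps -a` for overall container status. A minimal sketch of running those command lines under `bash -c`, the same shape as the `ssh_runner` entries above (executed locally here; the real runner executes them inside the node over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs a shell command line under bash -c, mirroring the
// ssh_runner invocations in the log, and returns its combined output.
func gather(cmdline string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	return string(out), err
}

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Println("Gathering logs for", name, "...")
		if out, err := gather(cmd); err != nil {
			fmt.Println("failed:", err)
		} else {
			fmt.Print(out)
		}
	}
}
```
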
	I1206 09:40:07.852016  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:07.863059  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:07.863127  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:07.888702  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:07.888722  203272 cri.go:89] found id: ""
	I1206 09:40:07.888731  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:07.888787  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:07.892581  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:07.892650  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:07.926356  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:07.926383  203272 cri.go:89] found id: ""
	I1206 09:40:07.926392  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:07.926450  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:07.930194  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:07.930271  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:07.956358  203272 cri.go:89] found id: ""
	I1206 09:40:07.956383  203272 logs.go:282] 0 containers: []
	W1206 09:40:07.956392  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:07.956402  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:07.956461  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:07.984852  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:07.984878  203272 cri.go:89] found id: ""
	I1206 09:40:07.984886  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:07.984992  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:07.988755  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:07.988883  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:08.020853  203272 cri.go:89] found id: ""
	I1206 09:40:08.020931  203272 logs.go:282] 0 containers: []
	W1206 09:40:08.020955  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:08.020970  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:08.021049  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:08.048739  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:08.048769  203272 cri.go:89] found id: ""
	I1206 09:40:08.048779  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:08.048847  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:08.052764  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:08.052843  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:08.081487  203272 cri.go:89] found id: ""
	I1206 09:40:08.081554  203272 logs.go:282] 0 containers: []
	W1206 09:40:08.081570  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:08.081577  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:08.081637  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:08.111996  203272 cri.go:89] found id: ""
	I1206 09:40:08.112019  203272 logs.go:282] 0 containers: []
	W1206 09:40:08.112029  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:08.112043  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:08.112061  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:08.151629  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:08.151666  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:08.183687  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:08.183720  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:08.213731  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:08.213779  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:08.244293  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:08.244322  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:08.259161  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:08.259199  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:08.307562  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:08.307601  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:08.367807  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:08.367843  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:08.430512  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:08.430533  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:08.430546  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:10.963335  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:10.973887  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:10.973967  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:11.001183  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:11.001206  203272 cri.go:89] found id: ""
	I1206 09:40:11.001215  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:11.001285  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:11.007045  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:11.007134  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:11.039363  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:11.039472  203272 cri.go:89] found id: ""
	I1206 09:40:11.039496  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:11.039618  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:11.043516  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:11.043585  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:11.073271  203272 cri.go:89] found id: ""
	I1206 09:40:11.073295  203272 logs.go:282] 0 containers: []
	W1206 09:40:11.073304  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:11.073310  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:11.073370  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:11.101465  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:11.101485  203272 cri.go:89] found id: ""
	I1206 09:40:11.101493  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:11.101556  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:11.105605  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:11.105681  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:11.134214  203272 cri.go:89] found id: ""
	I1206 09:40:11.134249  203272 logs.go:282] 0 containers: []
	W1206 09:40:11.134259  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:11.134266  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:11.134343  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:11.166987  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:11.167011  203272 cri.go:89] found id: ""
	I1206 09:40:11.167020  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:11.167090  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:11.171479  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:11.171572  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:11.198421  203272 cri.go:89] found id: ""
	I1206 09:40:11.198457  203272 logs.go:282] 0 containers: []
	W1206 09:40:11.198467  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:11.198474  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:11.198545  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:11.234037  203272 cri.go:89] found id: ""
	I1206 09:40:11.234129  203272 logs.go:282] 0 containers: []
	W1206 09:40:11.234153  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:11.234185  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:11.234226  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:11.303617  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:11.303636  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:11.303649  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:11.339980  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:11.340016  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:11.373205  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:11.373240  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:11.408864  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:11.408900  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:11.441301  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:11.441335  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:11.507178  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:11.507215  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:11.523823  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:11.523893  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:11.554485  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:11.554525  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
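Every "describe nodes" attempt in this stretch of the log fails with the same stderr, "The connection to the server localhost:8443 was refused", even though crictl keeps returning a live container ID for kube-apiserver. Connection refused means nothing is accepting TCP connections on that port at all, which a plain dial can confirm independently of kubectl. A minimal sketch, assuming it is run on the minikube node itself; the port 8443 is taken from the log above, and this is an illustration, not part of the test suite:

// probe.go - minimal sketch: check whether anything accepts TCP
// connections on the apiserver port that kubectl reports as refused.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the kubectl stderr in the log:
		// the apiserver container exists but nothing is serving on the port.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 is accepting connections")
}

If the dial succeeds while kubectl still fails, the problem lies in TLS or the kubeconfig rather than in the apiserver process itself.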
	I1206 09:40:14.096101  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:14.109762  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:14.109845  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:14.135858  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:14.135937  203272 cri.go:89] found id: ""
	I1206 09:40:14.135946  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:14.136077  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:14.140172  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:14.140247  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:14.165767  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:14.165793  203272 cri.go:89] found id: ""
	I1206 09:40:14.165802  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:14.165857  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:14.169538  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:14.169623  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:14.201944  203272 cri.go:89] found id: ""
	I1206 09:40:14.201968  203272 logs.go:282] 0 containers: []
	W1206 09:40:14.201977  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:14.201983  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:14.202038  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:14.230189  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:14.230267  203272 cri.go:89] found id: ""
	I1206 09:40:14.230288  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:14.230390  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:14.237218  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:14.237293  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:14.269489  203272 cri.go:89] found id: ""
	I1206 09:40:14.269511  203272 logs.go:282] 0 containers: []
	W1206 09:40:14.269520  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:14.269526  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:14.269583  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:14.303085  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:14.303104  203272 cri.go:89] found id: ""
	I1206 09:40:14.303112  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:14.303175  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:14.306994  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:14.307077  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:14.332341  203272 cri.go:89] found id: ""
	I1206 09:40:14.332414  203272 logs.go:282] 0 containers: []
	W1206 09:40:14.332438  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:14.332456  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:14.332547  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:14.358140  203272 cri.go:89] found id: ""
	I1206 09:40:14.358167  203272 logs.go:282] 0 containers: []
	W1206 09:40:14.358177  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:14.358191  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:14.358204  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:14.371262  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:14.371292  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:14.439346  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:14.439371  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:14.439413  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:14.477661  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:14.477695  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:14.515067  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:14.515100  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:14.545936  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:14.545964  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:14.575175  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:14.575207  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:14.635760  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:14.635795  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:14.675157  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:14.675193  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
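The closing "container status" step of each cycle uses a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl to a full path when possible, runs it, and only falls back to docker ps -a if that fails. The same resolve-then-fall-back idiom in Go, as an illustration only (this is not minikube's code):

// status_fallback.go - sketch of the resolve-then-fall-back idiom in
// `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	tool := "crictl"
	if path, err := exec.LookPath("crictl"); err == nil {
		tool = path // like `which crictl`; otherwise keep the bare name
	}
	out, err := exec.Command("sudo", tool, "ps", "-a").Output()
	if err == nil {
		return out, nil
	}
	// Mirror the `|| sudo docker ps -a` fallback.
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(string(out))
}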
	I1206 09:40:17.204236  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:17.214583  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:17.214707  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:17.243759  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:17.243842  203272 cri.go:89] found id: ""
	I1206 09:40:17.243870  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:17.243986  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:17.248841  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:17.248921  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:17.279715  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:17.279793  203272 cri.go:89] found id: ""
	I1206 09:40:17.279815  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:17.279910  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:17.284963  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:17.285092  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:17.315558  203272 cri.go:89] found id: ""
	I1206 09:40:17.315584  203272 logs.go:282] 0 containers: []
	W1206 09:40:17.315593  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:17.315600  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:17.315663  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:17.340845  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:17.340922  203272 cri.go:89] found id: ""
	I1206 09:40:17.340944  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:17.341022  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:17.344891  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:17.344958  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:17.370736  203272 cri.go:89] found id: ""
	I1206 09:40:17.370762  203272 logs.go:282] 0 containers: []
	W1206 09:40:17.370780  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:17.370787  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:17.370846  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:17.396490  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:17.396514  203272 cri.go:89] found id: ""
	I1206 09:40:17.396522  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:17.396629  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:17.400488  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:17.400558  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:17.425821  203272 cri.go:89] found id: ""
	I1206 09:40:17.425896  203272 logs.go:282] 0 containers: []
	W1206 09:40:17.425919  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:17.425938  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:17.426056  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:17.451003  203272 cri.go:89] found id: ""
	I1206 09:40:17.451025  203272 logs.go:282] 0 containers: []
	W1206 09:40:17.451035  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:17.451052  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:17.451065  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:17.488410  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:17.488446  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:17.517964  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:17.518001  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:17.580622  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:17.580655  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:17.621646  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:17.621678  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:17.661123  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:17.661157  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:17.694349  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:17.694432  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:17.723402  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:17.723430  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:17.737164  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:17.737191  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:17.807321  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
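Each cycle above repeats the same per-component lookup, sudo crictl ps -a --quiet --name=<component> (cri.go:54/89), and consistently finds containers for kube-apiserver, etcd, kube-scheduler, and kube-controller-manager but none for coredns, kube-proxy, kindnet, or storage-provisioner. A stand-alone sketch of that probe, assuming crictl is installed and runnable via sudo; this illustrates the pattern seen here, not minikube's actual cri.go implementation:

// crictl_probe.go - sketch of the per-component container lookup
// this log repeats: `crictl ps -a --quiet --name=<component>`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	// Component names taken from the log-gathering cycle above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}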
	I1206 09:40:20.307586  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:20.319767  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:20.319847  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:20.349965  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:20.349988  203272 cri.go:89] found id: ""
	I1206 09:40:20.349996  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:20.350058  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:20.353985  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:20.354069  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:20.384332  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:20.384355  203272 cri.go:89] found id: ""
	I1206 09:40:20.384363  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:20.384417  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:20.388092  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:20.388168  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:20.416651  203272 cri.go:89] found id: ""
	I1206 09:40:20.416683  203272 logs.go:282] 0 containers: []
	W1206 09:40:20.416693  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:20.416700  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:20.416759  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:20.445321  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:20.445344  203272 cri.go:89] found id: ""
	I1206 09:40:20.445352  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:20.445407  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:20.449002  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:20.449075  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:20.481992  203272 cri.go:89] found id: ""
	I1206 09:40:20.482018  203272 logs.go:282] 0 containers: []
	W1206 09:40:20.482028  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:20.482034  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:20.482093  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:20.511676  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:20.511697  203272 cri.go:89] found id: ""
	I1206 09:40:20.511705  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:20.511758  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:20.515326  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:20.515449  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:20.540279  203272 cri.go:89] found id: ""
	I1206 09:40:20.540318  203272 logs.go:282] 0 containers: []
	W1206 09:40:20.540328  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:20.540336  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:20.540411  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:20.569385  203272 cri.go:89] found id: ""
	I1206 09:40:20.569455  203272 logs.go:282] 0 containers: []
	W1206 09:40:20.569480  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:20.569506  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:20.569535  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:20.626238  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:20.626272  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:20.639226  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:20.639253  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:20.707470  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:20.707508  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:20.707522  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:20.741962  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:20.741995  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:20.780440  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:20.780514  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:20.812270  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:20.812300  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:20.841030  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:20.841068  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:20.876277  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:20.876310  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:23.403791  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:23.414882  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:23.414984  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:23.452740  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:23.452763  203272 cri.go:89] found id: ""
	I1206 09:40:23.452771  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:23.452840  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:23.457204  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:23.457288  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:23.486559  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:23.486578  203272 cri.go:89] found id: ""
	I1206 09:40:23.486594  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:23.486649  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:23.493234  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:23.493316  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:23.525959  203272 cri.go:89] found id: ""
	I1206 09:40:23.526012  203272 logs.go:282] 0 containers: []
	W1206 09:40:23.526022  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:23.526029  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:23.526099  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:23.571951  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:23.571973  203272 cri.go:89] found id: ""
	I1206 09:40:23.571996  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:23.572061  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:23.577211  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:23.577300  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:23.613740  203272 cri.go:89] found id: ""
	I1206 09:40:23.613777  203272 logs.go:282] 0 containers: []
	W1206 09:40:23.613787  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:23.613794  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:23.613858  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:23.648065  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:23.648152  203272 cri.go:89] found id: ""
	I1206 09:40:23.648174  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:23.648262  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:23.652576  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:23.652699  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:23.691891  203272 cri.go:89] found id: ""
	I1206 09:40:23.691975  203272 logs.go:282] 0 containers: []
	W1206 09:40:23.691999  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:23.692019  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:23.692142  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:23.721481  203272 cri.go:89] found id: ""
	I1206 09:40:23.721503  203272 logs.go:282] 0 containers: []
	W1206 09:40:23.721512  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:23.721525  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:23.721536  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:23.737311  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:23.737335  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:23.778549  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:23.778633  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:23.811191  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:23.811269  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:23.861342  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:23.861371  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:23.931099  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:23.931134  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:24.076775  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:24.076801  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:24.076820  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:24.126786  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:24.126821  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:24.171673  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:24.171706  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:26.720287  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:26.733172  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:26.733247  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:26.758926  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:26.758948  203272 cri.go:89] found id: ""
	I1206 09:40:26.758957  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:26.759017  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:26.762616  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:26.762688  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:26.788337  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:26.788359  203272 cri.go:89] found id: ""
	I1206 09:40:26.788367  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:26.788421  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:26.792124  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:26.792201  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:26.816876  203272 cri.go:89] found id: ""
	I1206 09:40:26.816957  203272 logs.go:282] 0 containers: []
	W1206 09:40:26.816972  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:26.816980  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:26.817037  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:26.843282  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:26.843336  203272 cri.go:89] found id: ""
	I1206 09:40:26.843345  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:26.843426  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:26.847047  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:26.847116  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:26.879357  203272 cri.go:89] found id: ""
	I1206 09:40:26.879405  203272 logs.go:282] 0 containers: []
	W1206 09:40:26.879415  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:26.879421  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:26.879480  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:26.910392  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:26.910415  203272 cri.go:89] found id: ""
	I1206 09:40:26.910423  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:26.910476  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:26.914102  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:26.914179  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:26.941960  203272 cri.go:89] found id: ""
	I1206 09:40:26.941996  203272 logs.go:282] 0 containers: []
	W1206 09:40:26.942005  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:26.942012  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:26.942070  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:26.966953  203272 cri.go:89] found id: ""
	I1206 09:40:26.966979  203272 logs.go:282] 0 containers: []
	W1206 09:40:26.966988  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:26.967001  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:26.967013  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:27.010188  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:27.010227  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:27.024450  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:27.024475  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:27.099026  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:27.099045  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:27.099057  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:27.134131  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:27.134166  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:27.163321  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:27.163360  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:27.195609  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:27.195636  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:27.253383  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:27.253419  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:27.293955  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:27.293990  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:29.831220  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:29.841855  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:29.841946  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:29.868889  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:29.868911  203272 cri.go:89] found id: ""
	I1206 09:40:29.868919  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:29.868974  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:29.872698  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:29.872808  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:29.900096  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:29.900119  203272 cri.go:89] found id: ""
	I1206 09:40:29.900128  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:29.900183  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:29.903875  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:29.903946  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:29.928945  203272 cri.go:89] found id: ""
	I1206 09:40:29.928970  203272 logs.go:282] 0 containers: []
	W1206 09:40:29.928979  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:29.928986  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:29.929044  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:29.959108  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:29.959140  203272 cri.go:89] found id: ""
	I1206 09:40:29.959149  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:29.959230  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:29.963097  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:29.963176  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:30.005427  203272 cri.go:89] found id: ""
	I1206 09:40:30.005453  203272 logs.go:282] 0 containers: []
	W1206 09:40:30.005462  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:30.005469  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:30.005543  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:30.142966  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:30.142987  203272 cri.go:89] found id: ""
	I1206 09:40:30.142995  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:30.143057  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:30.148031  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:30.148113  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:30.180538  203272 cri.go:89] found id: ""
	I1206 09:40:30.180609  203272 logs.go:282] 0 containers: []
	W1206 09:40:30.180639  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:30.180659  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:30.180745  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:30.216319  203272 cri.go:89] found id: ""
	I1206 09:40:30.216390  203272 logs.go:282] 0 containers: []
	W1206 09:40:30.216417  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:30.216446  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:30.216475  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:30.247588  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:30.247626  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:30.318467  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:30.318531  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:30.318553  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:30.357099  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:30.357131  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:30.389569  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:30.389598  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:30.422267  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:30.422299  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:30.488202  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:30.488248  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:30.502155  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:30.502184  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:30.535489  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:30.535522  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:33.069976  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:33.080714  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:33.080784  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:33.107457  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:33.107481  203272 cri.go:89] found id: ""
	I1206 09:40:33.107491  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:33.107563  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:33.111268  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:33.111347  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:33.142458  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:33.142478  203272 cri.go:89] found id: ""
	I1206 09:40:33.142486  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:33.142542  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:33.146424  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:33.146493  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:33.172918  203272 cri.go:89] found id: ""
	I1206 09:40:33.172943  203272 logs.go:282] 0 containers: []
	W1206 09:40:33.172952  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:33.172959  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:33.173023  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:33.200828  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:33.200850  203272 cri.go:89] found id: ""
	I1206 09:40:33.200858  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:33.200916  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:33.204632  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:33.204707  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:33.230105  203272 cri.go:89] found id: ""
	I1206 09:40:33.230129  203272 logs.go:282] 0 containers: []
	W1206 09:40:33.230139  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:33.230145  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:33.230207  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:33.260980  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:33.261015  203272 cri.go:89] found id: ""
	I1206 09:40:33.261025  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:33.261125  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:33.265001  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:33.265077  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:33.293160  203272 cri.go:89] found id: ""
	I1206 09:40:33.293188  203272 logs.go:282] 0 containers: []
	W1206 09:40:33.293197  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:33.293204  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:33.293263  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:33.319326  203272 cri.go:89] found id: ""
	I1206 09:40:33.319354  203272 logs.go:282] 0 containers: []
	W1206 09:40:33.319363  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:33.319406  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:33.319419  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:33.371842  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:33.371881  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:33.412372  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:33.412406  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:33.449347  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:33.449383  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:33.518184  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:33.518209  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:33.518224  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:33.552452  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:33.552483  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:33.582624  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:33.582661  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:33.611187  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:33.611217  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:33.672741  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:33.672781  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
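By this point the timestamps show the full gather-and-retry cycle firing roughly every three seconds (09:40:11, :14, :17, :20, :23, :27, :30, :33) without the apiserver ever becoming reachable, which is the standard poll-until-deadline shape. A minimal sketch of that loop, reusing the dial check from earlier; the six-minute deadline is a placeholder, not the test's actual timeout:

// poll.go - sketch of the ~3s poll-until-deadline cadence visible
// in the timestamps above; checkAPIServer is a hypothetical stand-in
// for the real reachability test.
package main

import (
	"fmt"
	"net"
	"time"
)

func checkAPIServer() error {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // placeholder deadline
	for time.Now().Before(deadline) {
		if err := checkAPIServer(); err == nil {
			fmt.Println("apiserver reachable")
			return
		}
		time.Sleep(3 * time.Second) // matches the cadence in the log
	}
	fmt.Println("timed out waiting for apiserver")
}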
	I1206 09:40:36.187328  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:36.198433  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:36.198510  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:36.224518  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:36.224540  203272 cri.go:89] found id: ""
	I1206 09:40:36.224549  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:36.224605  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:36.228380  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:36.228453  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:36.258228  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:36.258251  203272 cri.go:89] found id: ""
	I1206 09:40:36.258259  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:36.258315  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:36.262132  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:36.262204  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:36.289333  203272 cri.go:89] found id: ""
	I1206 09:40:36.289359  203272 logs.go:282] 0 containers: []
	W1206 09:40:36.289368  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:36.289377  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:36.289437  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:36.319936  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:36.320000  203272 cri.go:89] found id: ""
	I1206 09:40:36.320015  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:36.320072  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:36.323994  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:36.324069  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:36.352854  203272 cri.go:89] found id: ""
	I1206 09:40:36.352879  203272 logs.go:282] 0 containers: []
	W1206 09:40:36.352889  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:36.352896  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:36.352957  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:36.384141  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:36.384165  203272 cri.go:89] found id: ""
	I1206 09:40:36.384173  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:36.384229  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:36.388299  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:36.388373  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:36.418006  203272 cri.go:89] found id: ""
	I1206 09:40:36.418044  203272 logs.go:282] 0 containers: []
	W1206 09:40:36.418052  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:36.418059  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:36.418121  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:36.443413  203272 cri.go:89] found id: ""
	I1206 09:40:36.443439  203272 logs.go:282] 0 containers: []
	W1206 09:40:36.443448  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:36.443461  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:36.443473  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:36.483397  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:36.483426  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:36.514085  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:36.514148  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:36.545188  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:36.545216  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:36.604234  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:36.604269  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:36.618333  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:36.618361  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:36.656179  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:36.656212  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:36.753328  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:36.753353  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:36.753366  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:36.790950  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:36.791023  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:39.325619  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:39.338465  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:39.338578  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:39.364050  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:39.364074  203272 cri.go:89] found id: ""
	I1206 09:40:39.364082  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:39.364138  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:39.367962  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:39.368053  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:39.393277  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:39.393300  203272 cri.go:89] found id: ""
	I1206 09:40:39.393309  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:39.393364  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:39.396996  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:39.397084  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:39.423543  203272 cri.go:89] found id: ""
	I1206 09:40:39.423569  203272 logs.go:282] 0 containers: []
	W1206 09:40:39.423578  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:39.423585  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:39.423645  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:39.449817  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:39.449840  203272 cri.go:89] found id: ""
	I1206 09:40:39.449849  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:39.449925  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:39.453667  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:39.453764  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:39.478751  203272 cri.go:89] found id: ""
	I1206 09:40:39.478777  203272 logs.go:282] 0 containers: []
	W1206 09:40:39.478786  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:39.478792  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:39.478868  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:39.503799  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:39.503823  203272 cri.go:89] found id: ""
	I1206 09:40:39.503831  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:39.503911  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:39.507826  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:39.507939  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:39.535012  203272 cri.go:89] found id: ""
	I1206 09:40:39.535039  203272 logs.go:282] 0 containers: []
	W1206 09:40:39.535059  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:39.535084  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:39.535167  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:39.560824  203272 cri.go:89] found id: ""
	I1206 09:40:39.560853  203272 logs.go:282] 0 containers: []
	W1206 09:40:39.560862  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:39.560875  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:39.560886  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:39.619543  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:39.619580  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:39.692517  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:39.692538  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:39.692551  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:39.747401  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:39.747433  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:39.841469  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:39.841505  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:39.894795  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:39.894828  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:39.938430  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:39.938462  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:39.972956  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:39.972988  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:40.030384  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:40.030415  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
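Each "Gathering logs for ..." pair above shells out to tail the last 400 lines of either a CRI container (`crictl logs --tail 400 <id>`) or a systemd unit (`journalctl -u <unit> -n 400`). A minimal sketch of both calls, assuming `crictl` and `journalctl` are on PATH (the log itself invokes the full /usr/local/bin/crictl path via bash):

```go
// Sketch of the log-gathering step: tail the last N lines of a CRI
// container or a systemd unit, as the "Gathering logs for ..." lines do.
package main

import (
	"fmt"
	"os/exec"
)

// containerLogs tails a CRI container by ID (get IDs from `crictl ps -a --quiet`).
func containerLogs(id string, tail int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
	return string(out), err
}

// unitLogs tails a systemd unit, e.g. "kubelet" or "containerd".
func unitLogs(unit string, lines int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
	return string(out), err
}

func main() {
	if logs, err := unitLogs("kubelet", 400); err == nil {
		fmt.Print(logs)
	}
}
```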
	I1206 09:40:42.548189  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:42.558643  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:42.558718  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:42.584963  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:42.584985  203272 cri.go:89] found id: ""
	I1206 09:40:42.584994  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:42.585056  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:42.588890  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:42.588963  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:42.618148  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:42.618173  203272 cri.go:89] found id: ""
	I1206 09:40:42.618183  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:42.618241  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:42.622153  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:42.622230  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:42.649058  203272 cri.go:89] found id: ""
	I1206 09:40:42.649083  203272 logs.go:282] 0 containers: []
	W1206 09:40:42.649092  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:42.649098  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:42.649158  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:42.675832  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:42.675856  203272 cri.go:89] found id: ""
	I1206 09:40:42.675870  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:42.675927  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:42.679886  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:42.679961  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:42.707175  203272 cri.go:89] found id: ""
	I1206 09:40:42.707201  203272 logs.go:282] 0 containers: []
	W1206 09:40:42.707210  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:42.707216  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:42.707275  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:42.752766  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:42.752790  203272 cri.go:89] found id: ""
	I1206 09:40:42.752798  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:42.752853  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:42.757171  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:42.757246  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:42.790145  203272 cri.go:89] found id: ""
	I1206 09:40:42.790170  203272 logs.go:282] 0 containers: []
	W1206 09:40:42.790178  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:42.790185  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:42.790243  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:42.821277  203272 cri.go:89] found id: ""
	I1206 09:40:42.821301  203272 logs.go:282] 0 containers: []
	W1206 09:40:42.821319  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:42.821335  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:42.821350  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:42.888469  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:42.888491  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:42.888503  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:42.930393  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:42.930426  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:42.963603  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:42.963634  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:42.995944  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:42.995977  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:43.057953  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:43.057987  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:43.071350  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:43.071405  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:43.108802  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:43.108835  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:43.139576  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:43.139616  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:45.669642  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:45.680196  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:45.680267  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:45.704939  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:45.704960  203272 cri.go:89] found id: ""
	I1206 09:40:45.704968  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:45.705023  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:45.708866  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:45.708984  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:45.744856  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:45.744880  203272 cri.go:89] found id: ""
	I1206 09:40:45.744888  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:45.744977  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:45.749513  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:45.749614  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:45.789180  203272 cri.go:89] found id: ""
	I1206 09:40:45.789204  203272 logs.go:282] 0 containers: []
	W1206 09:40:45.789213  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:45.789220  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:45.789306  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:45.814833  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:45.814856  203272 cri.go:89] found id: ""
	I1206 09:40:45.814865  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:45.814998  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:45.819725  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:45.819829  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:45.852240  203272 cri.go:89] found id: ""
	I1206 09:40:45.852267  203272 logs.go:282] 0 containers: []
	W1206 09:40:45.852277  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:45.852284  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:45.852401  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:45.881991  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:45.882014  203272 cri.go:89] found id: ""
	I1206 09:40:45.882023  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:45.882130  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:45.886250  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:45.886357  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:45.913176  203272 cri.go:89] found id: ""
	I1206 09:40:45.913204  203272 logs.go:282] 0 containers: []
	W1206 09:40:45.913213  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:45.913220  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:45.913305  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:45.939320  203272 cri.go:89] found id: ""
	I1206 09:40:45.939347  203272 logs.go:282] 0 containers: []
	W1206 09:40:45.939360  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:45.939425  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:45.939442  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:45.968978  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:45.969011  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:46.032161  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:46.032199  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:46.046439  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:46.046468  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:46.120393  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:46.120459  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:46.120477  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:46.156575  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:46.156614  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:46.186061  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:46.186089  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:46.222070  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:46.222103  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:46.265781  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:46.265814  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:48.815596  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:48.826213  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:48.826286  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:48.854127  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:48.854150  203272 cri.go:89] found id: ""
	I1206 09:40:48.854158  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:48.854219  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:48.857843  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:48.857914  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:48.889853  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:48.889890  203272 cri.go:89] found id: ""
	I1206 09:40:48.889898  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:48.889953  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:48.894466  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:48.894542  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:48.919112  203272 cri.go:89] found id: ""
	I1206 09:40:48.919140  203272 logs.go:282] 0 containers: []
	W1206 09:40:48.919149  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:48.919155  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:48.919215  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:48.944346  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:48.944369  203272 cri.go:89] found id: ""
	I1206 09:40:48.944380  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:48.944436  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:48.948357  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:48.948439  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:48.973917  203272 cri.go:89] found id: ""
	I1206 09:40:48.973943  203272 logs.go:282] 0 containers: []
	W1206 09:40:48.973952  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:48.973958  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:48.974013  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:49.002039  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:49.002073  203272 cri.go:89] found id: ""
	I1206 09:40:49.002082  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:49.002157  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:49.006920  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:49.006994  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:49.032980  203272 cri.go:89] found id: ""
	I1206 09:40:49.033007  203272 logs.go:282] 0 containers: []
	W1206 09:40:49.033016  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:49.033023  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:49.033130  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:49.060655  203272 cri.go:89] found id: ""
	I1206 09:40:49.060680  203272 logs.go:282] 0 containers: []
	W1206 09:40:49.060688  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:49.060731  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:49.060752  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:49.073630  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:49.073699  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:49.113412  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:49.113450  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:49.158384  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:49.158420  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:49.218619  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:49.218653  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:49.283254  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:49.283273  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:49.283286  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:49.316825  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:49.316862  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:49.352052  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:49.352087  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:49.383484  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:49.383515  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:51.921415  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:51.932520  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:51.932590  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:51.958806  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:51.958825  203272 cri.go:89] found id: ""
	I1206 09:40:51.958839  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:51.958894  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:51.962768  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:51.962846  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:51.989009  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:51.989030  203272 cri.go:89] found id: ""
	I1206 09:40:51.989038  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:51.989097  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:51.993133  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:51.993210  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:52.020736  203272 cri.go:89] found id: ""
	I1206 09:40:52.020759  203272 logs.go:282] 0 containers: []
	W1206 09:40:52.020768  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:52.020775  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:52.020839  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:52.048629  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:52.048653  203272 cri.go:89] found id: ""
	I1206 09:40:52.048661  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:52.048719  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:52.052581  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:52.052700  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:52.078028  203272 cri.go:89] found id: ""
	I1206 09:40:52.078055  203272 logs.go:282] 0 containers: []
	W1206 09:40:52.078114  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:52.078126  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:52.078187  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:52.104901  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:52.104966  203272 cri.go:89] found id: ""
	I1206 09:40:52.104987  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:52.105074  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:52.108997  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:52.109087  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:52.140645  203272 cri.go:89] found id: ""
	I1206 09:40:52.140671  203272 logs.go:282] 0 containers: []
	W1206 09:40:52.140683  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:52.140689  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:52.140758  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:52.171317  203272 cri.go:89] found id: ""
	I1206 09:40:52.171418  203272 logs.go:282] 0 containers: []
	W1206 09:40:52.171446  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:52.171468  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:52.171494  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:52.229217  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:52.229250  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:52.242963  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:52.242992  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:52.317727  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:52.317794  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:52.317823  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:52.351117  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:52.351150  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:52.393110  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:52.393185  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:52.436115  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:52.436241  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:52.471591  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:52.471623  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:52.523711  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:52.523747  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:55.056359  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:55.068670  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:55.068748  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:55.106509  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:55.106534  203272 cri.go:89] found id: ""
	I1206 09:40:55.106543  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:55.106618  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:55.111238  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:55.111332  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:55.152943  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:55.152973  203272 cri.go:89] found id: ""
	I1206 09:40:55.152985  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:55.153069  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:55.157819  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:55.157982  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:55.185489  203272 cri.go:89] found id: ""
	I1206 09:40:55.185517  203272 logs.go:282] 0 containers: []
	W1206 09:40:55.185526  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:55.185533  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:55.185668  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:55.215124  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:55.215148  203272 cri.go:89] found id: ""
	I1206 09:40:55.215158  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:55.215214  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:55.219033  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:55.219126  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:55.248405  203272 cri.go:89] found id: ""
	I1206 09:40:55.248434  203272 logs.go:282] 0 containers: []
	W1206 09:40:55.248443  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:55.248450  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:55.248511  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:55.275210  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:55.275238  203272 cri.go:89] found id: ""
	I1206 09:40:55.275246  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:55.275302  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:55.279069  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:55.279192  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:55.306243  203272 cri.go:89] found id: ""
	I1206 09:40:55.306270  203272 logs.go:282] 0 containers: []
	W1206 09:40:55.306280  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:55.306286  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:55.306348  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:55.331927  203272 cri.go:89] found id: ""
	I1206 09:40:55.331956  203272 logs.go:282] 0 containers: []
	W1206 09:40:55.331966  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:55.331986  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:55.331997  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:55.345751  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:55.345779  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:55.383739  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:55.383773  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:55.415236  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:55.415267  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:55.450789  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:55.450827  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:55.501894  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:55.501935  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:55.544626  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:55.544660  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:55.603196  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:55.603280  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:55.672764  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:55.672787  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:55.672800  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:58.205183  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:40:58.215200  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:40:58.215274  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:40:58.241127  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:40:58.241149  203272 cri.go:89] found id: ""
	I1206 09:40:58.241157  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:40:58.241216  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:58.245175  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:40:58.245249  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:40:58.270955  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:58.270979  203272 cri.go:89] found id: ""
	I1206 09:40:58.270989  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:40:58.271043  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:58.274804  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:40:58.274880  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:40:58.301944  203272 cri.go:89] found id: ""
	I1206 09:40:58.301969  203272 logs.go:282] 0 containers: []
	W1206 09:40:58.301978  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:40:58.301984  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:40:58.302045  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:40:58.326828  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:58.326852  203272 cri.go:89] found id: ""
	I1206 09:40:58.326860  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:40:58.326920  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:58.330801  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:40:58.330875  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:40:58.357277  203272 cri.go:89] found id: ""
	I1206 09:40:58.357302  203272 logs.go:282] 0 containers: []
	W1206 09:40:58.357311  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:40:58.357317  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:40:58.357377  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:40:58.383251  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:58.383316  203272 cri.go:89] found id: ""
	I1206 09:40:58.383337  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:40:58.383440  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:40:58.387144  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:40:58.387219  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:40:58.411559  203272 cri.go:89] found id: ""
	I1206 09:40:58.411584  203272 logs.go:282] 0 containers: []
	W1206 09:40:58.411593  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:40:58.411599  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:40:58.411662  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:40:58.436887  203272 cri.go:89] found id: ""
	I1206 09:40:58.436911  203272 logs.go:282] 0 containers: []
	W1206 09:40:58.436920  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:40:58.436942  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:40:58.436954  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:40:58.469113  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:40:58.469142  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:40:58.504943  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:40:58.504984  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:40:58.558996  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:40:58.559031  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:40:58.595806  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:40:58.595846  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:40:58.625718  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:40:58.625747  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:40:58.684158  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:40:58.684193  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:40:58.697824  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:40:58.697860  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:40:58.765928  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:40:58.765949  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:40:58.765963  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
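
Each block above is one iteration of minikube's apiserver readiness wait: look for a live kube-apiserver process, re-enumerate the control-plane containers, then dump component logs. The process check is reproducible by hand from inside the node (reachable via minikube ssh; the profile name is elided here). Per procps pgrep, -f matches against the full command line, -x requires the whole line to match, and -n keeps only the newest match:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # prints a PID while an apiserver process is alive; exit status 1 otherwise
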
	I1206 09:41:01.301218  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:01.312881  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:01.312961  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:01.345680  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:01.345717  203272 cri.go:89] found id: ""
	I1206 09:41:01.345726  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:01.345796  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:01.351251  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:01.351342  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:01.411758  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:01.411797  203272 cri.go:89] found id: ""
	I1206 09:41:01.411806  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:01.411872  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:01.416423  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:01.416504  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:01.461208  203272 cri.go:89] found id: ""
	I1206 09:41:01.461234  203272 logs.go:282] 0 containers: []
	W1206 09:41:01.461251  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:01.461259  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:01.461351  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:01.500675  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:01.500705  203272 cri.go:89] found id: ""
	I1206 09:41:01.500714  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:01.500778  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:01.505677  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:01.505762  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:01.537205  203272 cri.go:89] found id: ""
	I1206 09:41:01.537238  203272 logs.go:282] 0 containers: []
	W1206 09:41:01.537247  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:01.537260  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:01.537328  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:01.580916  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:01.580997  203272 cri.go:89] found id: ""
	I1206 09:41:01.581020  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:01.581116  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:01.586130  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:01.586257  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:01.621413  203272 cri.go:89] found id: ""
	I1206 09:41:01.621479  203272 logs.go:282] 0 containers: []
	W1206 09:41:01.621501  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:01.621519  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:01.621608  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:01.656425  203272 cri.go:89] found id: ""
	I1206 09:41:01.656504  203272 logs.go:282] 0 containers: []
	W1206 09:41:01.656528  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:01.656571  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:01.656603  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:01.703330  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:01.703453  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:01.738314  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:01.738433  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:01.820568  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:01.820592  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:01.910892  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:01.910974  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:02.010974  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:41:02.010993  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:02.011005  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:02.061619  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:02.061654  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:02.076508  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:02.076539  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:02.140238  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:02.140274  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
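
The cri.go listing pattern is identical for every component: ask crictl for containers in any state whose name matches, and collect the bare IDs. An empty result, as for coredns, kube-proxy, kindnet and storage-provisioner throughout this run, means the container was never created, not merely that it exited. A minimal sketch of the same query, assuming crictl is already pointed at the containerd CRI socket:

    sudo crictl ps -a --quiet --name=kube-apiserver    # one container ID per line (here 9e7ab5e70ab4...)
    sudo crictl ps -a --quiet --name=coredns           # no output: no such container exists yet
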
	I1206 09:41:04.701938  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:04.712099  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:04.712178  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:04.737585  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:04.737606  203272 cri.go:89] found id: ""
	I1206 09:41:04.737615  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:04.737672  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:04.741639  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:04.741712  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:04.768293  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:04.768318  203272 cri.go:89] found id: ""
	I1206 09:41:04.768326  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:04.768382  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:04.772335  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:04.772436  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:04.802207  203272 cri.go:89] found id: ""
	I1206 09:41:04.802233  203272 logs.go:282] 0 containers: []
	W1206 09:41:04.802243  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:04.802249  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:04.802307  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:04.826580  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:04.826600  203272 cri.go:89] found id: ""
	I1206 09:41:04.826608  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:04.826668  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:04.830314  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:04.830386  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:04.856213  203272 cri.go:89] found id: ""
	I1206 09:41:04.856238  203272 logs.go:282] 0 containers: []
	W1206 09:41:04.856247  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:04.856253  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:04.856311  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:04.881487  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:04.881510  203272 cri.go:89] found id: ""
	I1206 09:41:04.881518  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:04.881595  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:04.885054  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:04.885125  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:04.912433  203272 cri.go:89] found id: ""
	I1206 09:41:04.912467  203272 logs.go:282] 0 containers: []
	W1206 09:41:04.912476  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:04.912501  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:04.912586  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:04.939028  203272 cri.go:89] found id: ""
	I1206 09:41:04.939051  203272 logs.go:282] 0 containers: []
	W1206 09:41:04.939060  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:04.939073  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:04.939083  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:04.972239  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:04.972276  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:05.038449  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:05.038530  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:05.076275  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:05.076311  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:05.109800  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:05.109836  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:05.142330  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:05.142358  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:05.201923  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:05.201960  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:05.215536  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:05.215606  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:05.309276  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:41:05.309350  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:05.309377  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
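
Per-component logs are then tailed straight from the runtime: crictl logs for CRI containers, journalctl for host units. Equivalent manual commands, reusing the etcd container ID found above:

    sudo crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
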
	I1206 09:41:07.860915  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:07.871495  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:07.871569  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:07.898657  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:07.898679  203272 cri.go:89] found id: ""
	I1206 09:41:07.898688  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:07.898745  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:07.902537  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:07.902608  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:07.927646  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:07.927668  203272 cri.go:89] found id: ""
	I1206 09:41:07.927677  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:07.927731  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:07.931385  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:07.931457  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:07.962678  203272 cri.go:89] found id: ""
	I1206 09:41:07.962703  203272 logs.go:282] 0 containers: []
	W1206 09:41:07.962712  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:07.962718  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:07.962779  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:08.000700  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:08.000723  203272 cri.go:89] found id: ""
	I1206 09:41:08.000731  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:08.000797  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:08.008889  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:08.009022  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:08.044424  203272 cri.go:89] found id: ""
	I1206 09:41:08.044448  203272 logs.go:282] 0 containers: []
	W1206 09:41:08.044460  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:08.044469  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:08.044534  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:08.072065  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:08.072094  203272 cri.go:89] found id: ""
	I1206 09:41:08.072103  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:08.072162  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:08.076168  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:08.076287  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:08.103699  203272 cri.go:89] found id: ""
	I1206 09:41:08.103729  203272 logs.go:282] 0 containers: []
	W1206 09:41:08.103738  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:08.103745  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:08.103806  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:08.132948  203272 cri.go:89] found id: ""
	I1206 09:41:08.132970  203272 logs.go:282] 0 containers: []
	W1206 09:41:08.132978  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:08.132991  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:08.133002  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:08.192388  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:08.192422  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:08.205455  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:08.205483  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:08.242716  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:08.242753  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:08.273679  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:08.273717  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:08.310826  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:08.310851  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:08.379789  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:41:08.379854  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:08.379873  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:08.414507  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:08.414540  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:08.447069  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:08.447101  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
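
The "container status" step is runtime-agnostic by construction: resolve crictl if it is on PATH, otherwise fall back to the bare name, and if that invocation fails, fall back to docker. Spelled out, the chain it runs is:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # `which crictl` substitutes the full path when available; a failing crictl call triggers docker ps -a
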
	I1206 09:41:10.979523  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:10.995844  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:10.995913  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:11.033685  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:11.033711  203272 cri.go:89] found id: ""
	I1206 09:41:11.033720  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:11.033775  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:11.038817  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:11.038895  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:11.071162  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:11.071190  203272 cri.go:89] found id: ""
	I1206 09:41:11.071199  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:11.071262  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:11.075611  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:11.075693  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:11.105095  203272 cri.go:89] found id: ""
	I1206 09:41:11.105122  203272 logs.go:282] 0 containers: []
	W1206 09:41:11.105131  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:11.105138  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:11.105198  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:11.133263  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:11.133293  203272 cri.go:89] found id: ""
	I1206 09:41:11.133303  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:11.133409  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:11.137743  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:11.137841  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:11.164964  203272 cri.go:89] found id: ""
	I1206 09:41:11.164990  203272 logs.go:282] 0 containers: []
	W1206 09:41:11.164999  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:11.165007  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:11.165074  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:11.196069  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:11.196093  203272 cri.go:89] found id: ""
	I1206 09:41:11.196104  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:11.196165  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:11.200288  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:11.200362  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:11.227670  203272 cri.go:89] found id: ""
	I1206 09:41:11.227693  203272 logs.go:282] 0 containers: []
	W1206 09:41:11.227702  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:11.227708  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:11.227767  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:11.254388  203272 cri.go:89] found id: ""
	I1206 09:41:11.254413  203272 logs.go:282] 0 containers: []
	W1206 09:41:11.254422  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:11.254438  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:11.254450  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:11.319165  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:41:11.319184  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:11.319197  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:11.355367  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:11.355416  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:11.404719  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:11.404751  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:11.417936  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:11.418014  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:11.451184  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:11.451219  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:11.482604  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:11.482636  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:11.512612  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:11.512648  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:11.543145  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:11.543184  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
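
Every "describe nodes" attempt fails identically because the kubeconfig at /var/lib/minikube/kubeconfig directs the client to localhost:8443, where nothing is listening. Two hedged checks from the node (the kubectl subcommand is standard; whether curl exists in the node image is an assumption):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig config view --minify
    curl -sk https://localhost:8443/healthz    # refused while the apiserver is down
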
	I1206 09:41:14.104686  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:14.115208  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:14.115276  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:14.143285  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:14.143311  203272 cri.go:89] found id: ""
	I1206 09:41:14.143320  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:14.143439  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:14.148016  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:14.148101  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:14.173034  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:14.173057  203272 cri.go:89] found id: ""
	I1206 09:41:14.173065  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:14.173149  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:14.176950  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:14.177026  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:14.202784  203272 cri.go:89] found id: ""
	I1206 09:41:14.202810  203272 logs.go:282] 0 containers: []
	W1206 09:41:14.202818  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:14.202824  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:14.202889  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:14.234101  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:14.234130  203272 cri.go:89] found id: ""
	I1206 09:41:14.234146  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:14.234208  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:14.238266  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:14.238342  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:14.264094  203272 cri.go:89] found id: ""
	I1206 09:41:14.264120  203272 logs.go:282] 0 containers: []
	W1206 09:41:14.264130  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:14.264136  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:14.264203  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:14.292554  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:14.292573  203272 cri.go:89] found id: ""
	I1206 09:41:14.292580  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:14.292634  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:14.297208  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:14.297282  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:14.327219  203272 cri.go:89] found id: ""
	I1206 09:41:14.327244  203272 logs.go:282] 0 containers: []
	W1206 09:41:14.327253  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:14.327260  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:14.327318  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:14.352310  203272 cri.go:89] found id: ""
	I1206 09:41:14.352387  203272 logs.go:282] 0 containers: []
	W1206 09:41:14.352409  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:14.352452  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:14.352481  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:14.389006  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:14.389031  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:14.446152  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:14.446189  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:14.459218  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:14.459247  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:14.530837  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:41:14.530855  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:14.530868  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:14.565959  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:14.565993  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:14.605763  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:14.605800  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:14.639096  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:14.639126  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:14.675985  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:14.676020  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
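
The dmesg step keeps only kernel messages at warning severity or worse. Per util-linux dmesg (flag semantics assumed from the manual): -P disables the pager, -H selects human-readable output, -L=never suppresses color, and --level restricts which log levels are printed:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
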
	I1206 09:41:17.209470  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:17.223517  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:17.223599  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:17.260021  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:17.260117  203272 cri.go:89] found id: ""
	I1206 09:41:17.260139  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:17.260251  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:17.268304  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:17.268375  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:17.305632  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:17.305651  203272 cri.go:89] found id: ""
	I1206 09:41:17.305659  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:17.305713  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:17.310426  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:17.310507  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:17.356311  203272 cri.go:89] found id: ""
	I1206 09:41:17.356331  203272 logs.go:282] 0 containers: []
	W1206 09:41:17.356340  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:17.356351  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:17.356408  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:17.397619  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:17.397694  203272 cri.go:89] found id: ""
	I1206 09:41:17.397716  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:17.397802  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:17.402047  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:17.402173  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:17.441663  203272 cri.go:89] found id: ""
	I1206 09:41:17.441744  203272 logs.go:282] 0 containers: []
	W1206 09:41:17.441768  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:17.441787  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:17.441893  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:17.473665  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:17.473744  203272 cri.go:89] found id: ""
	I1206 09:41:17.473767  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:17.473857  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:17.478551  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:17.478678  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:17.516660  203272 cri.go:89] found id: ""
	I1206 09:41:17.516738  203272 logs.go:282] 0 containers: []
	W1206 09:41:17.516762  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:17.516781  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:17.516887  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:17.554419  203272 cri.go:89] found id: ""
	I1206 09:41:17.554444  203272 logs.go:282] 0 containers: []
	W1206 09:41:17.554453  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:17.554469  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:17.554479  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:17.613253  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:17.613279  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:17.688812  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:17.688885  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:17.706477  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:17.706505  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:17.792322  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:17.792358  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:17.921401  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:41:17.921422  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:17.921434  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:17.961082  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:17.961111  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:18.004044  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:18.004102  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:18.042957  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:18.042986  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:20.573246  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:20.584088  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:20.584166  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:20.614787  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:20.614811  203272 cri.go:89] found id: ""
	I1206 09:41:20.614819  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:20.614885  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:20.619936  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:20.620027  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:20.648459  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:20.648485  203272 cri.go:89] found id: ""
	I1206 09:41:20.648494  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:20.648585  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:20.653042  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:20.653111  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:20.693407  203272 cri.go:89] found id: ""
	I1206 09:41:20.693429  203272 logs.go:282] 0 containers: []
	W1206 09:41:20.693438  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:20.693444  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:20.693501  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:20.736090  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:20.736108  203272 cri.go:89] found id: ""
	I1206 09:41:20.736115  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:20.736166  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:20.741640  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:20.741741  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:20.792271  203272 cri.go:89] found id: ""
	I1206 09:41:20.792296  203272 logs.go:282] 0 containers: []
	W1206 09:41:20.792306  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:20.792312  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:20.792430  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:20.848041  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:20.848065  203272 cri.go:89] found id: ""
	I1206 09:41:20.848073  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:20.848165  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:20.855578  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:20.855769  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:20.895177  203272 cri.go:89] found id: ""
	I1206 09:41:20.895205  203272 logs.go:282] 0 containers: []
	W1206 09:41:20.895219  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:20.895225  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:20.895446  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:20.931362  203272 cri.go:89] found id: ""
	I1206 09:41:20.931444  203272 logs.go:282] 0 containers: []
	W1206 09:41:20.931458  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:20.931520  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:20.931540  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:20.958183  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:20.958221  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:21.013329  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:21.013364  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:21.060395  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:21.060426  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:21.107138  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:21.107171  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:21.150665  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:21.150694  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:21.187919  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:21.187956  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:21.222922  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:21.222948  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:21.291591  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:21.291658  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:21.389715  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:41:23.889926  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:23.900119  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:23.900197  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:23.928511  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:23.928535  203272 cri.go:89] found id: ""
	I1206 09:41:23.928543  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:23.928607  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:23.932417  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:23.932488  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:23.960315  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:23.960334  203272 cri.go:89] found id: ""
	I1206 09:41:23.960342  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:23.960396  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:23.964122  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:23.964190  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:23.991450  203272 cri.go:89] found id: ""
	I1206 09:41:23.991473  203272 logs.go:282] 0 containers: []
	W1206 09:41:23.991482  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:23.991488  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:23.991549  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:24.021126  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:24.021151  203272 cri.go:89] found id: ""
	I1206 09:41:24.021160  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:24.021225  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:24.025081  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:24.025186  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:24.054326  203272 cri.go:89] found id: ""
	I1206 09:41:24.054353  203272 logs.go:282] 0 containers: []
	W1206 09:41:24.054362  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:24.054368  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:24.054428  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:24.082686  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:24.082726  203272 cri.go:89] found id: ""
	I1206 09:41:24.082735  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:24.082795  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:24.086735  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:24.086811  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:24.113115  203272 cri.go:89] found id: ""
	I1206 09:41:24.113137  203272 logs.go:282] 0 containers: []
	W1206 09:41:24.113145  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:24.113152  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:24.113212  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:24.139132  203272 cri.go:89] found id: ""
	I1206 09:41:24.139157  203272 logs.go:282] 0 containers: []
	W1206 09:41:24.139166  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:24.139180  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:24.139192  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:24.169784  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:24.169814  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:24.212103  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:24.212132  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:24.225095  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:24.225123  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:24.254502  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:24.254540  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:24.312695  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:24.312737  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:24.382335  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
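This "connection refused" on localhost:8443 recurs on every gathering pass below: kubectl cannot reach the apiserver even though a kube-apiserver container exists. A quick manual triage from inside the node, assuming shell access (these commands are illustrative and were not part of the test run):

	# is anything listening on the apiserver port?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# is the kube-apiserver container actually running, and what do its last logs say?
	sudo crictl ps -a --name kube-apiserver
	sudo crictl logs --tail 50 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21
	# probe the health endpoint directly (-k because the serving cert won't match "localhost";
	# this may return 403 if anonymous auth is disabled)
	curl -k https://localhost:8443/healthz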
	I1206 09:41:24.382357  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:24.382370  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:24.419522  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:24.419557  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:24.454324  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:24.454404  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
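Each gathering pass above has the same shape: resolve container IDs per control-plane component with crictl, tail each found container's logs, then pull the kubelet and containerd journals plus dmesg; the pgrep probe immediately below then checks (roughly every three seconds, per the timestamps) whether an apiserver process has appeared before the next pass. Condensed into a sketch, assuming crictl is on the PATH:

	# illustrative condensation of one log-gathering pass
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && { echo "No container was found matching \"$c\""; continue; }
	  for id in $ids; do sudo crictl logs --tail 400 "$id"; done
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400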
	I1206 09:41:27.011563  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:27.022954  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:27.023027  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:27.050672  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:27.050691  203272 cri.go:89] found id: ""
	I1206 09:41:27.050698  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:27.050752  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:27.054656  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:27.054728  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:27.082237  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:27.082258  203272 cri.go:89] found id: ""
	I1206 09:41:27.082265  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:27.082319  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:27.086132  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:27.086219  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:27.111255  203272 cri.go:89] found id: ""
	I1206 09:41:27.111278  203272 logs.go:282] 0 containers: []
	W1206 09:41:27.111287  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:27.111293  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:27.111352  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:27.142688  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:27.142713  203272 cri.go:89] found id: ""
	I1206 09:41:27.142722  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:27.142779  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:27.146456  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:27.146529  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:27.171053  203272 cri.go:89] found id: ""
	I1206 09:41:27.171079  203272 logs.go:282] 0 containers: []
	W1206 09:41:27.171100  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:27.171107  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:27.171190  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:27.200752  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:27.200814  203272 cri.go:89] found id: ""
	I1206 09:41:27.200835  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:27.200895  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:27.204694  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:27.204765  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:27.237067  203272 cri.go:89] found id: ""
	I1206 09:41:27.237093  203272 logs.go:282] 0 containers: []
	W1206 09:41:27.237103  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:27.237118  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:27.237178  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:27.263520  203272 cri.go:89] found id: ""
	I1206 09:41:27.263546  203272 logs.go:282] 0 containers: []
	W1206 09:41:27.263556  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:27.263570  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:27.263581  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:27.322575  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:27.322611  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:27.356614  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:27.356649  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:27.394653  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:27.394688  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:27.425388  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:27.425424  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:27.468469  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:27.468498  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:27.483514  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:27.483541  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:27.567819  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:41:27.567891  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:27.567921  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:27.600699  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:27.600731  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:30.132848  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:30.147051  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:30.147132  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:30.175192  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:30.175212  203272 cri.go:89] found id: ""
	I1206 09:41:30.175221  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:30.175279  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:30.179464  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:30.179536  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:30.206980  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:30.207001  203272 cri.go:89] found id: ""
	I1206 09:41:30.207009  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:30.207083  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:30.210917  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:30.210988  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:30.242043  203272 cri.go:89] found id: ""
	I1206 09:41:30.242122  203272 logs.go:282] 0 containers: []
	W1206 09:41:30.242146  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:30.242165  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:30.242266  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:30.268983  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:30.269003  203272 cri.go:89] found id: ""
	I1206 09:41:30.269011  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:30.269064  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:30.273199  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:30.273279  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:30.303248  203272 cri.go:89] found id: ""
	I1206 09:41:30.303274  203272 logs.go:282] 0 containers: []
	W1206 09:41:30.303283  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:30.303290  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:30.303347  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:30.328366  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:30.328388  203272 cri.go:89] found id: ""
	I1206 09:41:30.328397  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:30.328450  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:30.332141  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:30.332211  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:30.357348  203272 cri.go:89] found id: ""
	I1206 09:41:30.357424  203272 logs.go:282] 0 containers: []
	W1206 09:41:30.357440  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:30.357447  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:30.357506  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:30.387031  203272 cri.go:89] found id: ""
	I1206 09:41:30.387055  203272 logs.go:282] 0 containers: []
	W1206 09:41:30.387082  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:30.387099  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:30.387115  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:30.399749  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:30.399778  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:30.464925  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:41:30.464947  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:30.464970  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:30.518693  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:30.518724  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:30.551411  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:30.551446  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:30.588240  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:30.588269  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:30.654328  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:30.654363  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:30.687672  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:30.687712  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:30.724640  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:30.724675  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:33.257329  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:33.268032  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:33.268108  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:33.293470  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:33.293492  203272 cri.go:89] found id: ""
	I1206 09:41:33.293509  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:33.293569  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:33.298183  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:33.298280  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:33.328029  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:33.328053  203272 cri.go:89] found id: ""
	I1206 09:41:33.328061  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:33.328121  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:33.331930  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:33.332008  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:33.356832  203272 cri.go:89] found id: ""
	I1206 09:41:33.356856  203272 logs.go:282] 0 containers: []
	W1206 09:41:33.356865  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:33.356871  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:33.356930  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:33.387253  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:33.387274  203272 cri.go:89] found id: ""
	I1206 09:41:33.387282  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:33.387336  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:33.392558  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:33.392639  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:33.418431  203272 cri.go:89] found id: ""
	I1206 09:41:33.418459  203272 logs.go:282] 0 containers: []
	W1206 09:41:33.418470  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:33.418477  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:33.418543  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:33.444488  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:33.444510  203272 cri.go:89] found id: ""
	I1206 09:41:33.444518  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:33.444590  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:33.448446  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:33.448569  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:33.484717  203272 cri.go:89] found id: ""
	I1206 09:41:33.484742  203272 logs.go:282] 0 containers: []
	W1206 09:41:33.484752  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:33.484758  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:33.484839  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:33.536639  203272 cri.go:89] found id: ""
	I1206 09:41:33.536711  203272 logs.go:282] 0 containers: []
	W1206 09:41:33.536735  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:33.536761  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:33.536799  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:33.574994  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:33.575029  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:33.607802  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:33.607833  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:33.638176  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:33.638216  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:33.651961  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:33.651991  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:33.681010  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:33.681042  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:33.743343  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:33.743387  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:33.807645  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:41:33.807667  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:33.807690  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:33.841488  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:33.841526  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:36.375620  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:36.387920  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:36.388000  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:36.428724  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:36.428744  203272 cri.go:89] found id: ""
	I1206 09:41:36.428751  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:36.428803  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:36.432803  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:36.432872  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:36.471841  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:36.471865  203272 cri.go:89] found id: ""
	I1206 09:41:36.471875  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:36.471929  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:36.480220  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:36.480291  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:36.517597  203272 cri.go:89] found id: ""
	I1206 09:41:36.517620  203272 logs.go:282] 0 containers: []
	W1206 09:41:36.517628  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:36.517635  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:36.517692  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:36.587351  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:36.587404  203272 cri.go:89] found id: ""
	I1206 09:41:36.587413  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:36.587477  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:36.591983  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:36.592052  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:36.626028  203272 cri.go:89] found id: ""
	I1206 09:41:36.626053  203272 logs.go:282] 0 containers: []
	W1206 09:41:36.626062  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:36.626068  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:36.626121  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:36.663146  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:36.663179  203272 cri.go:89] found id: ""
	I1206 09:41:36.663187  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:36.663245  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:36.667510  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:36.667593  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:36.713085  203272 cri.go:89] found id: ""
	I1206 09:41:36.713108  203272 logs.go:282] 0 containers: []
	W1206 09:41:36.713117  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:36.713123  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:36.713198  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:36.740337  203272 cri.go:89] found id: ""
	I1206 09:41:36.740360  203272 logs.go:282] 0 containers: []
	W1206 09:41:36.740369  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:36.740381  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:36.740395  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:36.802954  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:36.802993  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:36.852004  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:36.852081  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:36.891984  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:36.892021  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:36.942424  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:36.942493  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:37.020964  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:37.021063  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:37.065924  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:37.066008  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:37.107519  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:37.107554  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:37.122714  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:37.122743  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:37.190586  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:41:39.691283  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:39.704523  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:41:39.704595  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:41:39.734562  203272 cri.go:89] found id: "9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:39.734588  203272 cri.go:89] found id: ""
	I1206 09:41:39.734596  203272 logs.go:282] 1 containers: [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21]
	I1206 09:41:39.734649  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:39.739980  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:41:39.740061  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:41:39.778665  203272 cri.go:89] found id: "3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:39.778695  203272 cri.go:89] found id: ""
	I1206 09:41:39.778704  203272 logs.go:282] 1 containers: [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841]
	I1206 09:41:39.778765  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:39.782803  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:41:39.782890  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:41:39.824575  203272 cri.go:89] found id: ""
	I1206 09:41:39.824610  203272 logs.go:282] 0 containers: []
	W1206 09:41:39.824619  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:41:39.824626  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:41:39.824692  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:41:39.856670  203272 cri.go:89] found id: "d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:39.856744  203272 cri.go:89] found id: ""
	I1206 09:41:39.856766  203272 logs.go:282] 1 containers: [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201]
	I1206 09:41:39.856854  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:39.861439  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:41:39.861565  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:41:39.891714  203272 cri.go:89] found id: ""
	I1206 09:41:39.891786  203272 logs.go:282] 0 containers: []
	W1206 09:41:39.891822  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:41:39.891848  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:41:39.891942  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:41:39.923824  203272 cri.go:89] found id: "fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:39.923900  203272 cri.go:89] found id: ""
	I1206 09:41:39.923922  203272 logs.go:282] 1 containers: [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599]
	I1206 09:41:39.924009  203272 ssh_runner.go:195] Run: which crictl
	I1206 09:41:39.930807  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:41:39.930934  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:41:39.977963  203272 cri.go:89] found id: ""
	I1206 09:41:39.978038  203272 logs.go:282] 0 containers: []
	W1206 09:41:39.978059  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:41:39.978081  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:41:39.978188  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:41:40.009894  203272 cri.go:89] found id: ""
	I1206 09:41:40.009978  203272 logs.go:282] 0 containers: []
	W1206 09:41:40.010003  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:41:40.010048  203272 logs.go:123] Gathering logs for etcd [3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841] ...
	I1206 09:41:40.010081  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841"
	I1206 09:41:40.064497  203272 logs.go:123] Gathering logs for kube-scheduler [d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201] ...
	I1206 09:41:40.064580  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201"
	I1206 09:41:40.109095  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:41:40.109182  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:41:40.147255  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:41:40.147295  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:41:40.190458  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:41:40.190494  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:41:40.313814  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:41:40.313897  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 09:41:40.327983  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:41:40.328009  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:41:40.427324  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 09:41:40.427345  203272 logs.go:123] Gathering logs for kube-controller-manager [fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599] ...
	I1206 09:41:40.427358  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599"
	I1206 09:41:40.466952  203272 logs.go:123] Gathering logs for kube-apiserver [9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21] ...
	I1206 09:41:40.466985  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21"
	I1206 09:41:43.040654  203272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:43.050846  203272 kubeadm.go:602] duration metric: took 4m3.538770581s to restartPrimaryControlPlane
	W1206 09:41:43.050918  203272 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 09:41:43.050979  203272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:41:43.517764  203272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:41:43.531565  203272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:41:43.540062  203272 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:41:43.540124  203272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:41:43.547991  203272 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:41:43.548017  203272 kubeadm.go:158] found existing configuration files:
	
	I1206 09:41:43.548068  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:41:43.556396  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:41:43.556458  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:41:43.564154  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:41:43.572300  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:41:43.572395  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:41:43.588210  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:41:43.596915  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:41:43.596984  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:41:43.604693  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:41:43.613875  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:41:43.613990  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
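The cleanup above applies one rule to each kubeconfig under /etc/kubernetes: if the file does not reference the expected control-plane endpoint (here because none of the files exist at all), remove it so kubeadm can regenerate it. The four check-and-remove pairs reduce to roughly:

	# restatement of the stale-config cleanup above
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done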
	I1206 09:41:43.622107  203272 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:41:43.671119  203272 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:41:43.671504  203272 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:41:43.746453  203272 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:41:43.746528  203272 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:41:43.746570  203272 kubeadm.go:319] OS: Linux
	I1206 09:41:43.746618  203272 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:41:43.746670  203272 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:41:43.746721  203272 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:41:43.746772  203272 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:41:43.746822  203272 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:41:43.746874  203272 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:41:43.746922  203272 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:41:43.746974  203272 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:41:43.747033  203272 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:41:43.819967  203272 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:41:43.820083  203272 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:41:43.820198  203272 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:41:54.277322  203272 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:41:54.280289  203272 out.go:252]   - Generating certificates and keys ...
	I1206 09:41:54.280389  203272 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:41:54.280455  203272 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:41:54.280531  203272 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 09:41:54.280594  203272 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 09:41:54.280663  203272 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 09:41:54.280717  203272 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 09:41:54.280780  203272 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 09:41:54.280841  203272 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 09:41:54.280914  203272 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 09:41:54.280986  203272 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 09:41:54.281024  203272 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 09:41:54.281079  203272 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:41:54.402522  203272 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:41:54.587044  203272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:41:54.868361  203272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:41:55.147798  203272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:41:55.553972  203272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:41:55.554819  203272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:41:55.557333  203272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:41:55.561169  203272 out.go:252]   - Booting up control plane ...
	I1206 09:41:55.561280  203272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:41:55.561358  203272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:41:55.561425  203272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:41:55.584575  203272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:41:55.584933  203272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:41:55.593617  203272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:41:55.594042  203272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:41:55.594090  203272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:41:55.727642  203272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:41:55.727864  203272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:45:55.727656  203272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000233276s
	I1206 09:45:55.727689  203272 kubeadm.go:319] 
	I1206 09:45:55.727747  203272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:45:55.727780  203272 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:45:55.727885  203272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:45:55.727891  203272 kubeadm.go:319] 
	I1206 09:45:55.727995  203272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:45:55.728028  203272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:45:55.728059  203272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:45:55.728063  203272 kubeadm.go:319] 
	I1206 09:45:55.731896  203272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:45:55.732319  203272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:45:55.732427  203272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:45:55.732662  203272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 09:45:55.732669  203272 kubeadm.go:319] 
	I1206 09:45:55.732737  203272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
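The init attempt dies waiting on the kubelet's local health endpoint, which never answered within the 4m0s budget. kubeadm's suggested triage, plus the exact probe it polls, can be replayed by hand on the node:

	# the probe kubeadm polls for up to four minutes (quoted verbatim in the error above)
	curl -sSL http://127.0.0.1:10248/healthz
	# kubeadm's suggested troubleshooting commands
	systemctl status kubelet
	journalctl -xeu kubelet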
	W1206 09:45:55.732845  203272 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233276s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
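	The recurring SystemVerification warning names the kubelet configuration field that keeps cgroup v1 support enabled. Since the log shows kubeadm applying a strategic-merge patch to the "kubeletconfiguration" target, one hedged way to carry such a field would be a kubeadm patch file; the directory and file below are hypothetical, and whether minikube wires a patches directory through its kubeadm.yaml is an assumption, not something this log confirms:
	
	# hypothetical patch file for kubeadm's "kubeletconfiguration" target
	mkdir -p patches
	cat > patches/kubeletconfiguration+strategic.yaml <<'EOF'
	# partial KubeletConfiguration; failCgroupV1=false opts back into cgroup v1
	failCgroupV1: false
	EOF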
	
	I1206 09:45:55.732926  203272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:45:56.167254  203272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:45:56.181822  203272 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:45:56.181885  203272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:45:56.192203  203272 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:45:56.192221  203272 kubeadm.go:158] found existing configuration files:
	
	I1206 09:45:56.192272  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:45:56.200784  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:45:56.200911  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:45:56.208703  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:45:56.217281  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:45:56.217342  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:45:56.225224  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:45:56.234005  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:45:56.234122  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:45:56.242119  203272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:45:56.251174  203272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:45:56.251299  203272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
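	The stale-config pass above is mechanical: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so the next kubeadm init can rewrite it. The same logic as a compact sketch (endpoint and file list copied from the log):
	
	# drop kubeconfigs that do not point at the expected endpoint
	endpoint='https://control-plane.minikube.internal:8443'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done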
	I1206 09:45:56.259237  203272 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:45:56.309575  203272 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:45:56.310148  203272 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:45:56.404955  203272 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:45:56.405107  203272 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:45:56.405177  203272 kubeadm.go:319] OS: Linux
	I1206 09:45:56.405267  203272 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:45:56.405327  203272 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:45:56.405378  203272 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:45:56.405429  203272 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:45:56.405481  203272 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:45:56.405540  203272 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:45:56.405588  203272 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:45:56.405639  203272 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:45:56.405689  203272 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:45:56.488495  203272 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:45:56.488669  203272 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:45:56.488835  203272 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:45:56.495245  203272 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:45:56.508279  203272 out.go:252]   - Generating certificates and keys ...
	I1206 09:45:56.508452  203272 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:45:56.508565  203272 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:45:56.508674  203272 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 09:45:56.508782  203272 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 09:45:56.508883  203272 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 09:45:56.508945  203272 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 09:45:56.509013  203272 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 09:45:56.509078  203272 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 09:45:56.509157  203272 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 09:45:56.509234  203272 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 09:45:56.509275  203272 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 09:45:56.509335  203272 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:45:56.925975  203272 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:45:57.076777  203272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:45:57.499994  203272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:45:57.561472  203272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:45:58.126317  203272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:45:58.127608  203272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:45:58.129806  203272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:45:58.182452  203272 out.go:252]   - Booting up control plane ...
	I1206 09:45:58.182605  203272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:45:58.182692  203272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:45:58.182804  203272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:45:58.182962  203272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:45:58.183081  203272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:45:58.183215  203272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:45:58.183306  203272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:45:58.183360  203272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:45:58.299840  203272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:45:58.299979  203272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:49:58.299284  203272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001053633s
	I1206 09:49:58.299721  203272 kubeadm.go:319] 
	I1206 09:49:58.299814  203272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:49:58.299858  203272 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:49:58.299986  203272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:49:58.299997  203272 kubeadm.go:319] 
	I1206 09:49:58.300130  203272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:49:58.300171  203272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:49:58.300208  203272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:49:58.300213  203272 kubeadm.go:319] 
	I1206 09:49:58.306843  203272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:49:58.307315  203272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:49:58.307451  203272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:49:58.307752  203272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 09:49:58.307765  203272 kubeadm.go:319] 
	I1206 09:49:58.307854  203272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:49:58.307912  203272 kubeadm.go:403] duration metric: took 12m18.883948993s to StartCluster
	I1206 09:49:58.307955  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:58.308019  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:58.345982  203272 cri.go:89] found id: ""
	I1206 09:49:58.346005  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.346017  203272 logs.go:284] No container was found matching "kube-apiserver"
	I1206 09:49:58.346030  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:49:58.346089  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:58.395597  203272 cri.go:89] found id: ""
	I1206 09:49:58.395635  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.395644  203272 logs.go:284] No container was found matching "etcd"
	I1206 09:49:58.395650  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:49:58.395725  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:58.426022  203272 cri.go:89] found id: ""
	I1206 09:49:58.426043  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.426051  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:49:58.426057  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:58.426112  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:58.456770  203272 cri.go:89] found id: ""
	I1206 09:49:58.456791  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.456799  203272 logs.go:284] No container was found matching "kube-scheduler"
	I1206 09:49:58.456806  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:58.456869  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:58.488515  203272 cri.go:89] found id: ""
	I1206 09:49:58.488539  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.488554  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:58.488560  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:58.488621  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:58.518890  203272 cri.go:89] found id: ""
	I1206 09:49:58.518912  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.518920  203272 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 09:49:58.518927  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:58.518993  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:58.549882  203272 cri.go:89] found id: ""
	I1206 09:49:58.549962  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.549985  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:58.550024  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:58.550134  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:58.580252  203272 cri.go:89] found id: ""
	I1206 09:49:58.580326  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.580350  203272 logs.go:284] No container was found matching "storage-provisioner"
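	The sweep above probes the CRI once per control-plane component; an empty ID list is exactly what produces each "No container was found" warning. Condensed into one loop (component names copied from the log):
	
	# list CRI container IDs per component; empty output means none running
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none>}"
	done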
	I1206 09:49:58.580372  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:58.580417  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:58.679112  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:58.679135  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:49:58.679147  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:49:58.731340  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:49:58.731440  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:58.767161  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:58.767186  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:58.833114  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:58.833189  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 09:49:58.847191  203272 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001053633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 09:49:58.847337  203272 out.go:285] * 
	W1206 09:49:58.847451  203272 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001053633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:49:58.847509  203272 out.go:285] * 
	W1206 09:49:58.850373  203272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:49:58.857718  203272 out.go:203] 
	W1206 09:49:58.860773  203272 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001053633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:49:58.861136  203272 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 09:49:58.861242  203272 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 09:49:58.864280  203272 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-228904 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
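The suggestion logged above maps directly onto the invocation this test used; a retry would look like the sketch below, where the --extra-config flag is taken verbatim from the suggestion and everything else from the failing args (whether it resolves this particular kubelet failure is untested here):

	# hypothetical retry with the suggested kubelet cgroup driver override
	out/minikube-linux-arm64 start -p kubernetes-upgrade-228904 \
	  --memory=3072 --kubernetes-version=v1.35.0-beta.0 \
	  --alsologtostderr -v=1 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd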
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-228904 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-228904 version --output=json: exit status 1 (118.411448ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
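The refused connection to 192.168.76.2:8443 can be cross-checked from the host through the container's published apiserver port; a quick sketch using standard docker tooling (the concrete host port comes from the inspect output below, and -k is needed because the host does not trust the apiserver certificate):

	# resolve the host port mapped to the apiserver and probe /healthz
	docker port kubernetes-upgrade-228904 8443
	curl -k --max-time 5 "https://$(docker port kubernetes-upgrade-228904 8443 | head -n1)/healthz"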
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-06 09:49:59.935616717 +0000 UTC m=+4893.409821227
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-228904
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-228904:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d7a5884935ebe834220949b6f0eaaf13b79ad30d2fc9bb97b50476d1328aaf1a",
	        "Created": "2025-12-06T09:36:45.794624724Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203767,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:37:22.698389135Z",
	            "FinishedAt": "2025-12-06T09:37:21.556374679Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d7a5884935ebe834220949b6f0eaaf13b79ad30d2fc9bb97b50476d1328aaf1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d7a5884935ebe834220949b6f0eaaf13b79ad30d2fc9bb97b50476d1328aaf1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d7a5884935ebe834220949b6f0eaaf13b79ad30d2fc9bb97b50476d1328aaf1a/hosts",
	        "LogPath": "/var/lib/docker/containers/d7a5884935ebe834220949b6f0eaaf13b79ad30d2fc9bb97b50476d1328aaf1a/d7a5884935ebe834220949b6f0eaaf13b79ad30d2fc9bb97b50476d1328aaf1a-json.log",
	        "Name": "/kubernetes-upgrade-228904",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-228904:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-228904",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d7a5884935ebe834220949b6f0eaaf13b79ad30d2fc9bb97b50476d1328aaf1a",
	                "LowerDir": "/var/lib/docker/overlay2/d9c4054876e56d8d924bf766a94eeeaff036da9a5f6df4094cf9b70a42a123fa-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9c4054876e56d8d924bf766a94eeeaff036da9a5f6df4094cf9b70a42a123fa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9c4054876e56d8d924bf766a94eeeaff036da9a5f6df4094cf9b70a42a123fa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9c4054876e56d8d924bf766a94eeeaff036da9a5f6df4094cf9b70a42a123fa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-228904",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-228904/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-228904",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-228904",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-228904",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f245ced4645ce7cdf769690852d742a122c460efe58c047f56ba4b797015ab7a",
	            "SandboxKey": "/var/run/docker/netns/f245ced4645c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-228904": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:3d:9b:db:5b:1d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db5e96975cc6c42bc37725ad1ae4cebdbfc561cbc790e22b2bd38e5682d5a93a",
	                    "EndpointID": "fe2466bf57e816b0a9246dc53a01c11edae0d7a7d62a59d0228b6b706e8a93fc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-228904",
	                        "d7a5884935eb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
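The inspect output above shows each exposed container port (22, 2376, 5000, 8443, 32443) published on an ephemeral loopback port; 8443/tcp, for example, lands on 127.0.0.1:33021. The harness reads these mappings with the same Go template that appears in the cli_runner lines later in this log. Below is a minimal stand-alone sketch of that lookup; hostPortFor is an illustrative helper (not minikube code), and it assumes the docker CLI is on PATH and the container exists.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor asks the docker CLI which host port a container port is
// published on, using the same template style as the cli_runner calls above.
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPortFor("kubernetes-upgrade-228904", "8443/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver reachable via 127.0.0.1:" + p) // 33021 in the dump above
}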
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-228904 -n kubernetes-upgrade-228904
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-228904 -n kubernetes-upgrade-228904: exit status 2 (749.786748ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
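As the "(may be ok)" note indicates, minikube status can print Running for the host while still exiting non-zero, because degraded component state is reported through the exit code rather than stdout. A short sketch of capturing both signals from Go, reusing the binary path and profile from this run (illustrative only, not the helper's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "kubernetes-upgrade-228904")
	out, err := cmd.Output() // stdout is still returned alongside an ExitError
	host := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("host=%s exit=%d (may be ok)\n", host, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("host =", host)
}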
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-228904 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-228904 logs -n 25: (1.175506953s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                              ARGS                                                                                                               │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-793086 sudo systemctl cat docker --no-pager                                                                                                                                                                           │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo cat /etc/docker/daemon.json                                                                                                                                                                               │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo docker system info                                                                                                                                                                                        │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                       │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                       │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                  │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                            │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo cri-dockerd --version                                                                                                                                                                                     │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                       │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo systemctl cat containerd --no-pager                                                                                                                                                                       │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo cat /etc/containerd/config.toml                                                                                                                                                                           │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo containerd config dump                                                                                                                                                                                    │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo systemctl status crio --all --full --no-pager                                                                                                                                                             │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo systemctl cat crio --no-pager                                                                                                                                                                             │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                   │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ ssh     │ -p cilium-793086 sudo crio config                                                                                                                                                                                               │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │                     │
	│ delete  │ -p cilium-793086                                                                                                                                                                                                                │ cilium-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:45 UTC │
	│ start   │ -p force-systemd-env-003791 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                │ force-systemd-env-003791 │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:45 UTC │
	│ ssh     │ force-systemd-env-003791 ssh cat /etc/containerd/config.toml                                                                                                                                                                    │ force-systemd-env-003791 │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:45 UTC │
	│ delete  │ -p force-systemd-env-003791                                                                                                                                                                                                     │ force-systemd-env-003791 │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:45 UTC │
	│ start   │ -p cert-expiration-980262 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                    │ cert-expiration-980262   │ jenkins │ v1.37.0 │ 06 Dec 25 09:45 UTC │ 06 Dec 25 09:46 UTC │
	│ start   │ -p cert-expiration-980262 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                 │ cert-expiration-980262   │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:49 UTC │
	│ delete  │ -p cert-expiration-980262                                                                                                                                                                                                       │ cert-expiration-980262   │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │ 06 Dec 25 09:49 UTC │
	│ start   │ -p cert-options-117308 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd │ cert-options-117308      │ jenkins │ v1.37.0 │ 06 Dec 25 09:49 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:49:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
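Every entry below follows that klog header layout. For illustration, a tiny Go parser for lines in exactly this format; the regexp is an assumption derived from the layout string above, not klog's own code:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader encodes the documented [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout.
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I1206 09:49:36.517068  248617 out.go:360] Setting OutFile to fd 1 ..."
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}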
	I1206 09:49:36.517068  248617 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:49:36.517180  248617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:49:36.517183  248617 out.go:374] Setting ErrFile to fd 2...
	I1206 09:49:36.517187  248617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:49:36.517462  248617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:49:36.517924  248617 out.go:368] Setting JSON to false
	I1206 09:49:36.518834  248617 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5528,"bootTime":1765009049,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:49:36.518894  248617 start.go:143] virtualization:  
	I1206 09:49:36.525114  248617 out.go:179] * [cert-options-117308] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:49:36.528539  248617 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:49:36.529230  248617 notify.go:221] Checking for updates...
	I1206 09:49:36.534959  248617 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:49:36.538302  248617 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:49:36.541486  248617 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:49:36.544571  248617 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:49:36.547574  248617 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:49:36.551289  248617 config.go:182] Loaded profile config "kubernetes-upgrade-228904": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:49:36.551414  248617 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:49:36.580253  248617 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:49:36.580364  248617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:49:36.637347  248617 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:49:36.628141924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:49:36.637444  248617 docker.go:319] overlay module found
	I1206 09:49:36.640714  248617 out.go:179] * Using the docker driver based on user configuration
	I1206 09:49:36.643683  248617 start.go:309] selected driver: docker
	I1206 09:49:36.643693  248617 start.go:927] validating driver "docker" against <nil>
	I1206 09:49:36.643704  248617 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:49:36.644452  248617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:49:36.696487  248617 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:49:36.687626494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:49:36.696621  248617 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:49:36.696841  248617 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:49:36.699652  248617 out.go:179] * Using Docker driver with root privileges
	I1206 09:49:36.702516  248617 cni.go:84] Creating CNI manager for ""
	I1206 09:49:36.702576  248617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:49:36.702585  248617 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:49:36.702665  248617 start.go:353] cluster config:
	{Name:cert-options-117308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-117308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:49:36.705783  248617 out.go:179] * Starting "cert-options-117308" primary control-plane node in "cert-options-117308" cluster
	I1206 09:49:36.708505  248617 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 09:49:36.711510  248617 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:49:36.714374  248617 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:49:36.714423  248617 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1206 09:49:36.714429  248617 cache.go:65] Caching tarball of preloaded images
	I1206 09:49:36.714457  248617 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:49:36.714513  248617 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 09:49:36.714522  248617 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 09:49:36.714626  248617 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/config.json ...
	I1206 09:49:36.714641  248617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/config.json: {Name:mk64469bcddd1b77d82c1c362bbec692737e8c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:36.733869  248617 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:49:36.733880  248617 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:49:36.733892  248617 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:49:36.733919  248617 start.go:360] acquireMachinesLock for cert-options-117308: {Name:mk8f0976adf9df9bac6c7c993f92afc1dff76824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:49:36.734011  248617 start.go:364] duration metric: took 78.679µs to acquireMachinesLock for "cert-options-117308"
	I1206 09:49:36.734033  248617 start.go:93] Provisioning new machine with config: &{Name:cert-options-117308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-117308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:49:36.734110  248617 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:49:36.737718  248617 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:49:36.737969  248617 start.go:159] libmachine.API.Create for "cert-options-117308" (driver="docker")
	I1206 09:49:36.738000  248617 client.go:173] LocalClient.Create starting
	I1206 09:49:36.738074  248617 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 09:49:36.738111  248617 main.go:143] libmachine: Decoding PEM data...
	I1206 09:49:36.738128  248617 main.go:143] libmachine: Parsing certificate...
	I1206 09:49:36.738188  248617 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 09:49:36.738206  248617 main.go:143] libmachine: Decoding PEM data...
	I1206 09:49:36.738216  248617 main.go:143] libmachine: Parsing certificate...
	I1206 09:49:36.738572  248617 cli_runner.go:164] Run: docker network inspect cert-options-117308 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:49:36.754929  248617 cli_runner.go:211] docker network inspect cert-options-117308 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:49:36.755027  248617 network_create.go:284] running [docker network inspect cert-options-117308] to gather additional debugging logs...
	I1206 09:49:36.755044  248617 cli_runner.go:164] Run: docker network inspect cert-options-117308
	W1206 09:49:36.770950  248617 cli_runner.go:211] docker network inspect cert-options-117308 returned with exit code 1
	I1206 09:49:36.770976  248617 network_create.go:287] error running [docker network inspect cert-options-117308]: docker network inspect cert-options-117308: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-options-117308 not found
	I1206 09:49:36.770999  248617 network_create.go:289] output of [docker network inspect cert-options-117308]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-options-117308 not found
	
	** /stderr **
	I1206 09:49:36.771106  248617 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:49:36.792889  248617 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
	I1206 09:49:36.793252  248617 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6479799cc46a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:b3:f8:bd:10:a1} reservation:<nil>}
	I1206 09:49:36.793609  248617 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-045bb1cdddf9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:c6:f0:a4:f5:8d} reservation:<nil>}
	I1206 09:49:36.793991  248617 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-db5e96975cc6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:50:3a:c5:c6:b5} reservation:<nil>}
	I1206 09:49:36.794464  248617 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a7c920}
	I1206 09:49:36.794479  248617 network_create.go:124] attempt to create docker network cert-options-117308 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 09:49:36.794536  248617 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-options-117308 cert-options-117308
	I1206 09:49:36.870681  248617 network_create.go:108] docker network cert-options-117308 192.168.85.0/24 created
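The probed candidates above (192.168.49.0/24, .58, .67, .76, then .85) suggest the scan walks the third octet in steps of 9 and takes the first /24 that no existing bridge claims. A simplified sketch of that first-free scan; pickSubnet and the taken set are stand-ins for minikube's real interface checks, and the step size is inferred from this run only:

package main

import "fmt"

// pickSubnet returns the first 192.168.x.0/24 candidate, stepping x by 9
// from 49, that is not already claimed by an existing network.
func pickSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	// The four subnets the log reports as taken by br-* interfaces.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(pickSubnet(taken)) // 192.168.85.0/24 true, matching the log
}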
	I1206 09:49:36.870702  248617 kic.go:121] calculated static IP "192.168.85.2" for the "cert-options-117308" container
	I1206 09:49:36.870784  248617 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:49:36.885550  248617 cli_runner.go:164] Run: docker volume create cert-options-117308 --label name.minikube.sigs.k8s.io=cert-options-117308 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:49:36.905425  248617 oci.go:103] Successfully created a docker volume cert-options-117308
	I1206 09:49:36.905493  248617 cli_runner.go:164] Run: docker run --rm --name cert-options-117308-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-117308 --entrypoint /usr/bin/test -v cert-options-117308:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:49:37.461335  248617 oci.go:107] Successfully prepared a docker volume cert-options-117308
	I1206 09:49:37.461395  248617 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:49:37.461403  248617 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:49:37.461472  248617 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-options-117308:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:49:41.477031  248617 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-options-117308:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.0155108s)
	I1206 09:49:41.477052  248617 kic.go:203] duration metric: took 4.015646063s to extract preloaded images to volume ...
	W1206 09:49:41.477195  248617 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 09:49:41.477298  248617 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:49:41.534426  248617 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-117308 --name cert-options-117308 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-117308 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-117308 --network cert-options-117308 --ip 192.168.85.2 --volume cert-options-117308:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:49:41.870354  248617 cli_runner.go:164] Run: docker container inspect cert-options-117308 --format={{.State.Running}}
	I1206 09:49:41.894647  248617 cli_runner.go:164] Run: docker container inspect cert-options-117308 --format={{.State.Status}}
	I1206 09:49:41.918183  248617 cli_runner.go:164] Run: docker exec cert-options-117308 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:49:41.974411  248617 oci.go:144] the created container "cert-options-117308" has a running status.
	I1206 09:49:41.974430  248617 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/cert-options-117308/id_rsa...
	I1206 09:49:42.459328  248617 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/cert-options-117308/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:49:42.481616  248617 cli_runner.go:164] Run: docker container inspect cert-options-117308 --format={{.State.Status}}
	I1206 09:49:42.506757  248617 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:49:42.506768  248617 kic_runner.go:114] Args: [docker exec --privileged cert-options-117308 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:49:42.567129  248617 cli_runner.go:164] Run: docker container inspect cert-options-117308 --format={{.State.Status}}
	I1206 09:49:42.586288  248617 machine.go:94] provisionDockerMachine start ...
	I1206 09:49:42.586387  248617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-117308
	I1206 09:49:42.605634  248617 main.go:143] libmachine: Using SSH client type: native
	I1206 09:49:42.605956  248617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1206 09:49:42.605962  248617 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:49:42.606618  248617 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42662->127.0.0.1:33053: read: connection reset by peer
	I1206 09:49:45.763019  248617 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-options-117308
	
	I1206 09:49:45.763033  248617 ubuntu.go:182] provisioning hostname "cert-options-117308"
	I1206 09:49:45.763097  248617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-117308
	I1206 09:49:45.781142  248617 main.go:143] libmachine: Using SSH client type: native
	I1206 09:49:45.781469  248617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1206 09:49:45.781478  248617 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-options-117308 && echo "cert-options-117308" | sudo tee /etc/hostname
	I1206 09:49:45.942848  248617 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-options-117308
	
	I1206 09:49:45.942917  248617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-117308
	I1206 09:49:45.961658  248617 main.go:143] libmachine: Using SSH client type: native
	I1206 09:49:45.961964  248617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1206 09:49:45.961978  248617 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-117308' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-117308/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-117308' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:49:46.115756  248617 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:49:46.115771  248617 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 09:49:46.115807  248617 ubuntu.go:190] setting up certificates
	I1206 09:49:46.115817  248617 provision.go:84] configureAuth start
	I1206 09:49:46.115880  248617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-117308
	I1206 09:49:46.133170  248617 provision.go:143] copyHostCerts
	I1206 09:49:46.133231  248617 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 09:49:46.133238  248617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 09:49:46.133320  248617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 09:49:46.133422  248617 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 09:49:46.133426  248617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 09:49:46.133452  248617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 09:49:46.133506  248617 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 09:49:46.133515  248617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 09:49:46.133538  248617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 09:49:46.133582  248617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.cert-options-117308 san=[127.0.0.1 192.168.85.2 cert-options-117308 localhost minikube]
	I1206 09:49:46.692809  248617 provision.go:177] copyRemoteCerts
	I1206 09:49:46.692871  248617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:49:46.692926  248617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-117308
	I1206 09:49:46.710871  248617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/cert-options-117308/id_rsa Username:docker}
	I1206 09:49:46.815303  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 09:49:46.834673  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1206 09:49:46.854577  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:49:46.872923  248617 provision.go:87] duration metric: took 757.083272ms to configureAuth
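configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the container IP 192.168.85.2, the profile name, localhost and minikube. A compressed, hypothetical sketch of issuing a SAN-bearing certificate with Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs server.pem with its ca.pem:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-options-117308"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		DNSNames:     []string{"cert-options-117308", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed (template doubles as parent); minikube uses its CA as parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}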
	I1206 09:49:46.872942  248617 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:49:46.873144  248617 config.go:182] Loaded profile config "cert-options-117308": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:49:46.873149  248617 machine.go:97] duration metric: took 4.286851887s to provisionDockerMachine
	I1206 09:49:46.873155  248617 client.go:176] duration metric: took 10.1351499s to LocalClient.Create
	I1206 09:49:46.873174  248617 start.go:167] duration metric: took 10.135205138s to libmachine.API.Create "cert-options-117308"
	I1206 09:49:46.873180  248617 start.go:293] postStartSetup for "cert-options-117308" (driver="docker")
	I1206 09:49:46.873188  248617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:49:46.873239  248617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:49:46.873277  248617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-117308
	I1206 09:49:46.890610  248617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/cert-options-117308/id_rsa Username:docker}
	I1206 09:49:46.996741  248617 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:49:47.002007  248617 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:49:47.002029  248617 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:49:47.002039  248617 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 09:49:47.002108  248617 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 09:49:47.002192  248617 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 09:49:47.002308  248617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:49:47.012541  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:49:47.035536  248617 start.go:296] duration metric: took 162.342043ms for postStartSetup
	I1206 09:49:47.035912  248617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-117308
	I1206 09:49:47.053343  248617 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/config.json ...
	I1206 09:49:47.053614  248617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:49:47.053653  248617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-117308
	I1206 09:49:47.070631  248617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/cert-options-117308/id_rsa Username:docker}
	I1206 09:49:47.176966  248617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:49:47.182080  248617 start.go:128] duration metric: took 10.447956403s to createHost
	I1206 09:49:47.182095  248617 start.go:83] releasing machines lock for "cert-options-117308", held for 10.448077421s
	I1206 09:49:47.182178  248617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-117308
	I1206 09:49:47.199525  248617 ssh_runner.go:195] Run: cat /version.json
	I1206 09:49:47.199567  248617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-117308
	I1206 09:49:47.199589  248617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:49:47.199649  248617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-117308
	I1206 09:49:47.224960  248617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/cert-options-117308/id_rsa Username:docker}
	I1206 09:49:47.241577  248617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/cert-options-117308/id_rsa Username:docker}
	I1206 09:49:47.425081  248617 ssh_runner.go:195] Run: systemctl --version
	I1206 09:49:47.431702  248617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:49:47.436080  248617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:49:47.436148  248617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:49:47.464607  248617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 09:49:47.464619  248617 start.go:496] detecting cgroup driver to use...
	I1206 09:49:47.464651  248617 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 09:49:47.464698  248617 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:49:47.479719  248617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:49:47.493012  248617 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:49:47.493067  248617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:49:47.512580  248617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:49:47.531837  248617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:49:47.644710  248617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:49:47.767717  248617 docker.go:234] disabling docker service ...
	I1206 09:49:47.767780  248617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:49:47.792413  248617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:49:47.806014  248617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:49:47.927272  248617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:49:48.047187  248617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:49:48.060805  248617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:49:48.075563  248617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:49:48.085030  248617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:49:48.094687  248617 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:49:48.094766  248617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:49:48.104062  248617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:49:48.113575  248617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:49:48.122599  248617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:49:48.131753  248617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:49:48.140186  248617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:49:48.148848  248617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:49:48.157852  248617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
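The run then points crictl at containerd and rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the detected cgroupfs driver, legacy runc v1 runtime names are replaced by io.containerd.runc.v2, conf_dir is pinned to /etc/cni/net.d, and enable_unprivileged_ports is removed and re-added under the CRI plugin. A quick way to reproduce the crictl config and spot-check the edits, assuming the same file locations:

    # Point crictl at containerd's CRI socket
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' \
      | sudo tee /etc/crictl.yaml

    # Spot-check the keys the sed commands above are expected to have set
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml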
	I1206 09:49:48.166687  248617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:49:48.174248  248617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:49:48.181699  248617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:49:48.298048  248617 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:49:48.436830  248617 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:49:48.436890  248617 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
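minikube gives the restarted containerd up to 60 seconds to expose its socket before probing crictl. A minimal wait loop with the same budget, using the socket path from the log:

    # Wait up to 60s for containerd's socket to appear
    for _ in $(seq 1 60); do
      [ -S /run/containerd/containerd.sock ] && break
      sleep 1
    done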
	I1206 09:49:48.440821  248617 start.go:564] Will wait 60s for crictl version
	I1206 09:49:48.440876  248617 ssh_runner.go:195] Run: which crictl
	I1206 09:49:48.444457  248617 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:49:48.471214  248617 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 09:49:48.471271  248617 ssh_runner.go:195] Run: containerd --version
	I1206 09:49:48.498262  248617 ssh_runner.go:195] Run: containerd --version
	I1206 09:49:48.529217  248617 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1206 09:49:48.532387  248617 cli_runner.go:164] Run: docker network inspect cert-options-117308 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:49:48.548710  248617 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:49:48.552368  248617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
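The /etc/hosts update above is idempotent: it filters out any existing host.minikube.internal entry, appends the current gateway mapping, and copies the result back over the original. The same pattern, with the gateway IP taken from this run:

    # Refresh the host.minikube.internal entry (192.168.85.1 from the log)
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts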
	I1206 09:49:48.562025  248617 kubeadm.go:884] updating cluster {Name:cert-options-117308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-117308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:49:48.562137  248617 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:49:48.562200  248617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:49:48.591730  248617 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:49:48.591743  248617 containerd.go:534] Images already preloaded, skipping extraction
	I1206 09:49:48.591801  248617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:49:48.617420  248617 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:49:48.617432  248617 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:49:48.617440  248617 kubeadm.go:935] updating node { 192.168.85.2 8555 v1.34.2 containerd true true} ...
	I1206 09:49:48.617543  248617 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-117308 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-options-117308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:49:48.617614  248617 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:49:48.645501  248617 cni.go:84] Creating CNI manager for ""
	I1206 09:49:48.645512  248617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:49:48.645532  248617 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:49:48.645554  248617 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8555 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-117308 NodeName:cert-options-117308 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:49:48.645668  248617 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "cert-options-117308"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:49:48.645734  248617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:49:48.653775  248617 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:49:48.653838  248617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:49:48.661809  248617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1206 09:49:48.675138  248617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:49:48.688564  248617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
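With the kubelet drop-in, service unit, and rendered kubeadm config all staged, the config can be sanity-checked before init. This is optional and not something the log itself does; kubeadm config validate exists in recent kubeadm releases:

    # Hypothetical pre-flight for the staged config (not run by minikube here)
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new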
	I1206 09:49:48.702011  248617 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:49:48.705993  248617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:49:48.716390  248617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:49:48.830057  248617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:49:48.847969  248617 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308 for IP: 192.168.85.2
	I1206 09:49:48.847979  248617 certs.go:195] generating shared ca certs ...
	I1206 09:49:48.847994  248617 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:48.848149  248617 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 09:49:48.848191  248617 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 09:49:48.848196  248617 certs.go:257] generating profile certs ...
	I1206 09:49:48.848255  248617 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/client.key
	I1206 09:49:48.848265  248617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/client.crt with IP's: []
	I1206 09:49:49.043244  248617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/client.crt ...
	I1206 09:49:49.043260  248617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/client.crt: {Name:mkb5dcc3acd0706b9ce2a42215b93640a2eba00d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:49.043480  248617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/client.key ...
	I1206 09:49:49.043486  248617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/client.key: {Name:mk17d74d598f44bd1b785c9b2e75d5bf4fa3997d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:49.043585  248617 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.key.b04b855b
	I1206 09:49:49.043596  248617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.crt.b04b855b with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 09:49:49.178904  248617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.crt.b04b855b ...
	I1206 09:49:49.178923  248617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.crt.b04b855b: {Name:mk5ad3f04beab12f92587aa3db6c4a79d95f4a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:49.179111  248617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.key.b04b855b ...
	I1206 09:49:49.179118  248617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.key.b04b855b: {Name:mkc2da5c879b1a127c0d65f28194549abccef7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:49.179207  248617 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.crt.b04b855b -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.crt
	I1206 09:49:49.179280  248617 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.key.b04b855b -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.key
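The apiserver certificate just written is the focus of the cert-options profile: it must carry the extra SANs requested at start (localhost, www.google.com, 127.0.0.1, 192.168.15.15) alongside the standard service and node IPs. One way to confirm, assuming OpenSSL is available on the host:

    # Inspect the SANs baked into the generated apiserver cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.crt \
      | grep -A1 'Subject Alternative Name'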
	I1206 09:49:49.179367  248617 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/proxy-client.key
	I1206 09:49:49.179403  248617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/proxy-client.crt with IP's: []
	I1206 09:49:49.254441  248617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/proxy-client.crt ...
	I1206 09:49:49.254457  248617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/proxy-client.crt: {Name:mkd2397ee970675de892664409a742be6864063b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:49.254675  248617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/proxy-client.key ...
	I1206 09:49:49.254684  248617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/proxy-client.key: {Name:mkaa852d382d30cef612788fbc515fb6901147ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:49:49.254886  248617 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 09:49:49.254926  248617 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 09:49:49.254934  248617 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:49:49.254960  248617 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 09:49:49.254985  248617 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:49:49.255019  248617 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 09:49:49.255062  248617 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:49:49.255711  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:49:49.274928  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:49:49.293931  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:49:49.312684  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:49:49.332092  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I1206 09:49:49.352044  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:49:49.370935  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:49:49.388781  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/cert-options-117308/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1206 09:49:49.406688  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 09:49:49.425662  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:49:49.443416  248617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 09:49:49.461639  248617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:49:49.474593  248617 ssh_runner.go:195] Run: openssl version
	I1206 09:49:49.480999  248617 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 09:49:49.488669  248617 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 09:49:49.496421  248617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 09:49:49.500181  248617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 09:49:49.500234  248617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 09:49:49.541451  248617 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:49:49.549343  248617 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
	I1206 09:49:49.557041  248617 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 09:49:49.564513  248617 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 09:49:49.572481  248617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 09:49:49.576577  248617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 09:49:49.576635  248617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 09:49:49.618328  248617 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:49:49.626105  248617 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:49:49.633637  248617 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:49:49.641524  248617 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:49:49.649406  248617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:49:49.653685  248617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:49:49.653750  248617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:49:49.695537  248617 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:49:49.703324  248617 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
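The 8-hex-digit link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is how the system trust store looks up CAs in /etc/ssl/certs. The name can be derived from the certificate itself:

    # The symlink name is the cert's subject hash plus a '.0' suffix
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"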
	I1206 09:49:49.711050  248617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:49:49.714743  248617 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:49:49.714786  248617 kubeadm.go:401] StartCluster: {Name:cert-options-117308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-117308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:49:49.715754  248617 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:49:49.715836  248617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:49:49.760819  248617 cri.go:89] found id: ""
	I1206 09:49:49.760883  248617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:49:49.770044  248617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:49:49.777805  248617 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:49:49.777858  248617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:49:49.787205  248617 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:49:49.787216  248617 kubeadm.go:158] found existing configuration files:
	
	I1206 09:49:49.787278  248617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I1206 09:49:49.795585  248617 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:49:49.795640  248617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:49:49.803147  248617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I1206 09:49:49.811091  248617 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:49:49.811149  248617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:49:49.818643  248617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I1206 09:49:49.826213  248617 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:49:49.826268  248617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:49:49.834162  248617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I1206 09:49:49.841972  248617 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:49:49.842030  248617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
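The four grep-then-rm exchanges above implement stale-config cleanup: a kubeconfig survives only if it already points at the expected control-plane endpoint, and on a first start none exist, so every grep fails and the rm calls are no-ops. The pattern condenses to:

    # Keep a kubeconfig only if it targets the expected endpoint
    ep='https://control-plane.minikube.internal:8555'
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done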
	I1206 09:49:49.849442  248617 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:49:49.917271  248617 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1206 09:49:49.917521  248617 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:49:49.996849  248617 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:49:58.299284  203272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001053633s
	I1206 09:49:58.299721  203272 kubeadm.go:319] 
	I1206 09:49:58.299814  203272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:49:58.299858  203272 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:49:58.299986  203272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:49:58.299997  203272 kubeadm.go:319] 
	I1206 09:49:58.300130  203272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:49:58.300171  203272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:49:58.300208  203272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 09:49:58.300213  203272 kubeadm.go:319] 
	I1206 09:49:58.306843  203272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:49:58.307315  203272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:49:58.307451  203272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:49:58.307752  203272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 09:49:58.307765  203272 kubeadm.go:319] 
	I1206 09:49:58.307854  203272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:49:58.307912  203272 kubeadm.go:403] duration metric: took 12m18.883948993s to StartCluster
	I1206 09:49:58.307955  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 09:49:58.308019  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 09:49:58.345982  203272 cri.go:89] found id: ""
	I1206 09:49:58.346005  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.346017  203272 logs.go:284] No container was found matching "kube-apiserver"
	I1206 09:49:58.346030  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 09:49:58.346089  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 09:49:58.395597  203272 cri.go:89] found id: ""
	I1206 09:49:58.395635  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.395644  203272 logs.go:284] No container was found matching "etcd"
	I1206 09:49:58.395650  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 09:49:58.395725  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 09:49:58.426022  203272 cri.go:89] found id: ""
	I1206 09:49:58.426043  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.426051  203272 logs.go:284] No container was found matching "coredns"
	I1206 09:49:58.426057  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 09:49:58.426112  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 09:49:58.456770  203272 cri.go:89] found id: ""
	I1206 09:49:58.456791  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.456799  203272 logs.go:284] No container was found matching "kube-scheduler"
	I1206 09:49:58.456806  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 09:49:58.456869  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 09:49:58.488515  203272 cri.go:89] found id: ""
	I1206 09:49:58.488539  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.488554  203272 logs.go:284] No container was found matching "kube-proxy"
	I1206 09:49:58.488560  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 09:49:58.488621  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 09:49:58.518890  203272 cri.go:89] found id: ""
	I1206 09:49:58.518912  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.518920  203272 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 09:49:58.518927  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 09:49:58.518993  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 09:49:58.549882  203272 cri.go:89] found id: ""
	I1206 09:49:58.549962  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.549985  203272 logs.go:284] No container was found matching "kindnet"
	I1206 09:49:58.550024  203272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 09:49:58.550134  203272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 09:49:58.580252  203272 cri.go:89] found id: ""
	I1206 09:49:58.580326  203272 logs.go:282] 0 containers: []
	W1206 09:49:58.580350  203272 logs.go:284] No container was found matching "storage-provisioner"
	I1206 09:49:58.580372  203272 logs.go:123] Gathering logs for describe nodes ...
	I1206 09:49:58.580417  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 09:49:58.679112  203272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 09:49:58.679135  203272 logs.go:123] Gathering logs for containerd ...
	I1206 09:49:58.679147  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 09:49:58.731340  203272 logs.go:123] Gathering logs for container status ...
	I1206 09:49:58.731440  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 09:49:58.767161  203272 logs.go:123] Gathering logs for kubelet ...
	I1206 09:49:58.767186  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 09:49:58.833114  203272 logs.go:123] Gathering logs for dmesg ...
	I1206 09:49:58.833189  203272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 09:49:58.847191  203272 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001053633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 09:49:58.847337  203272 out.go:285] * 
	W1206 09:49:58.847451  203272 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001053633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:49:58.847509  203272 out.go:285] * 
	W1206 09:49:58.850373  203272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 09:49:58.857718  203272 out.go:203] 
	W1206 09:49:58.860773  203272 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001053633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 09:49:58.861136  203272 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 09:49:58.861242  203272 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 09:49:58.864280  203272 out.go:203] 
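Following the suggestion printed above, triage starts at the kubelet itself, and a retry can pass the systemd cgroup driver explicitly. The profile name below is inferred from the containerd log that follows; treat this as a sketch rather than a verified fix:

    # Inspect the kubelet, then retry with the suggested cgroup driver
    systemctl status kubelet
    journalctl -xeu kubelet
    minikube start -p kubernetes-upgrade-228904 \
      --extra-config=kubelet.cgroup-driver=systemd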
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 09:41:51 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:51.553512427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:41:51 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:51.554764816Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.515208277s"
	Dec 06 09:41:51 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:51.554813366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 06 09:41:51 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:51.555956888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 06 09:41:52 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:52.222365300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 06 09:41:52 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:52.224752948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 06 09:41:52 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:52.227185249Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 06 09:41:52 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:52.230754500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 06 09:41:52 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:52.231337293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 675.33708ms"
	Dec 06 09:41:52 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:52.231871125Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 06 09:41:52 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:52.233056436Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 09:41:54 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:54.263724198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:41:54 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:54.266535482Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21140371"
	Dec 06 09:41:54 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:54.268632222Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:41:54 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:54.274107408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:41:54 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:54.276435068Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 2.043336621s"
	Dec 06 09:41:54 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:41:54.276497222Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
	Dec 06 09:46:43 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:46:43.459902589Z" level=info msg="container event discarded" container=fc631bb5bbe41a50f0378f0c34cdfcf186d67bb06804d5650ee4747860182599 type=CONTAINER_DELETED_EVENT
	Dec 06 09:46:43 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:46:43.472643074Z" level=info msg="container event discarded" container=81119d53713b04f51629e80d9c76d5ed1b3abdf890f1f098b9fb9a2b42dd9330 type=CONTAINER_DELETED_EVENT
	Dec 06 09:46:43 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:46:43.472712630Z" level=info msg="container event discarded" container=3e1c61d3384bc4122158497579ed2d4dbcece7bb5c315e5d918ee56678fa0841 type=CONTAINER_DELETED_EVENT
	Dec 06 09:46:43 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:46:43.486963974Z" level=info msg="container event discarded" container=0245baf967fb7d97958c5e0cdaa90489f0bcbe0d555c393910b3d81721631a59 type=CONTAINER_DELETED_EVENT
	Dec 06 09:46:43 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:46:43.498251389Z" level=info msg="container event discarded" container=9e7ab5e70ab455fe5276e7f9464007280a33621c3659572187707ed58d40fc21 type=CONTAINER_DELETED_EVENT
	Dec 06 09:46:43 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:46:43.498315710Z" level=info msg="container event discarded" container=e7d6ada35652a05f990a9094a0f4be278f1eaaa0814f0d4dfe0e396bd7b8c140 type=CONTAINER_DELETED_EVENT
	Dec 06 09:46:43 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:46:43.514543182Z" level=info msg="container event discarded" container=d9958e90490b7654871392f24d0429f19ff020be5677f638d69ba99cce936201 type=CONTAINER_DELETED_EVENT
	Dec 06 09:46:43 kubernetes-upgrade-228904 containerd[557]: time="2025-12-06T09:46:43.514599224Z" level=info msg="container event discarded" container=2947d333d4573c376094864a09dfa002c9fe6e29f94734a40031256567454023 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 09:50:01 up  1:32,  0 user,  load average: 1.67, 1.61, 2.07
	Linux kubernetes-upgrade-228904 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 09:49:58 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:49:58 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 09:49:58 kubernetes-upgrade-228904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:49:58 kubernetes-upgrade-228904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:49:59 kubernetes-upgrade-228904 kubelet[14471]: E1206 09:49:59.125243   14471 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:49:59 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:49:59 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:49:59 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 09:49:59 kubernetes-upgrade-228904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:49:59 kubernetes-upgrade-228904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:50:00 kubernetes-upgrade-228904 kubelet[14477]: E1206 09:50:00.049008   14477 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:50:00 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:50:00 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:50:00 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 06 09:50:00 kubernetes-upgrade-228904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:50:00 kubernetes-upgrade-228904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:50:00 kubernetes-upgrade-228904 kubelet[14498]: E1206 09:50:00.905805   14498 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:50:00 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:50:00 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 09:50:01 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 06 09:50:01 kubernetes-upgrade-228904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:50:01 kubernetes-upgrade-228904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 09:50:01 kubernetes-upgrade-228904 kubelet[14589]: E1206 09:50:01.833433   14589 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 09:50:01 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 09:50:01 kubernetes-upgrade-228904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
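Editor's note: the kubeadm [WARNING SystemVerification] message and the repeating kubelet journal entries above share a single root cause: kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host unless the 'FailCgroupV1' option named in the warning is set to false. A minimal sketch of that override, assuming kubelet reads its KubeletConfiguration from /var/lib/kubelet/config.yaml (the path and the camelCase YAML key are assumptions, not taken from this log):

	# Sketch only: append the escape hatch named in the kubeadm warning to the
	# kubelet configuration (assumed path), then restart the service.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet
	# kubeadm itself must also be told to skip the check that printed the
	# warning, e.g. via --ignore-preflight-errors=SystemVerification.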
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-228904 -n kubernetes-upgrade-228904
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-228904 -n kubernetes-upgrade-228904: exit status 2 (479.872172ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-228904" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-228904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-228904
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-228904: (2.700312792s)
--- FAIL: TestKubernetesUpgrade (807.49s)
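
The alternative workaround minikube itself suggests (the W1206 09:49:58 line above) targets the cgroup driver rather than cgroup v1 support, so on this cgroup v1 host it may not be sufficient on its own; it is reproduced here for convenience with the profile name and version from this run (other flags omitted):

	# Suggestion copied from the minikube warning above; untested against
	# this particular failure.
	out/minikube-linux-arm64 start -p kubernetes-upgrade-228904 \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd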

TestStartStop/group/no-preload/serial/FirstStart (510.72s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 09:52:57.332692    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m28.986755416s)

-- stdout --
	* [no-preload-257359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-257359" primary control-plane node in "no-preload-257359" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
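The stderr trace below shows what --preload=false changes in practice: instead of extracting one preloaded image tarball, minikube caches each control-plane image individually (the cache.go "save to tar file" lines) before loading them into the node. To inspect that per-image cache after such a run, one could list it directly; the path is copied from the cache.go lines, the ls invocation itself is illustrative:

	# List the per-architecture image cache populated by the no-preload start.
	ls -R /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/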
** stderr ** 
	I1206 09:52:25.494212  265222 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:52:25.498162  265222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:52:25.498212  265222 out.go:374] Setting ErrFile to fd 2...
	I1206 09:52:25.498234  265222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:52:25.498529  265222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:52:25.499010  265222 out.go:368] Setting JSON to false
	I1206 09:52:25.500160  265222 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5697,"bootTime":1765009049,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:52:25.500267  265222 start.go:143] virtualization:  
	I1206 09:52:25.504361  265222 out.go:179] * [no-preload-257359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:52:25.507871  265222 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:52:25.507960  265222 notify.go:221] Checking for updates...
	I1206 09:52:25.512124  265222 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:52:25.515299  265222 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:52:25.518371  265222 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:52:25.521580  265222 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:52:25.524777  265222 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:52:25.528400  265222 config.go:182] Loaded profile config "embed-certs-100767": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:52:25.528517  265222 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:52:25.584961  265222 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:52:25.585089  265222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:52:25.718335  265222 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 09:52:25.703792627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:52:25.718442  265222 docker.go:319] overlay module found
	I1206 09:52:25.722085  265222 out.go:179] * Using the docker driver based on user configuration
	I1206 09:52:25.725424  265222 start.go:309] selected driver: docker
	I1206 09:52:25.725445  265222 start.go:927] validating driver "docker" against <nil>
	I1206 09:52:25.725473  265222 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:52:25.726199  265222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:52:25.830860  265222 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 09:52:25.817424037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:52:25.831021  265222 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:52:25.831252  265222 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:52:25.833677  265222 out.go:179] * Using Docker driver with root privileges
	I1206 09:52:25.836623  265222 cni.go:84] Creating CNI manager for ""
	I1206 09:52:25.836703  265222 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:52:25.836714  265222 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:52:25.836797  265222 start.go:353] cluster config:
	{Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:25.839946  265222 out.go:179] * Starting "no-preload-257359" primary control-plane node in "no-preload-257359" cluster
	I1206 09:52:25.842829  265222 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 09:52:25.845761  265222 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:52:25.848678  265222 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:52:25.848817  265222 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/config.json ...
	I1206 09:52:25.848849  265222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/config.json: {Name:mkb0a38d36182c982de0b0e01a13c0a52e323729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:25.849034  265222 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:52:25.849196  265222 cache.go:107] acquiring lock: {Name:mkad35cce177b57f018574c39ee8c3c239eb9b07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:25.849249  265222 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 09:52:25.849265  265222 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 66.577µs
	I1206 09:52:25.849275  265222 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 09:52:25.849286  265222 cache.go:107] acquiring lock: {Name:mk51ddffc8cf367c8f9ab9dab46cca9425ce4f0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:25.849354  265222 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:52:25.849523  265222 cache.go:107] acquiring lock: {Name:mkdb80297b5c34ff2c59c7d0547bc50e4c902573 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:25.849725  265222 cache.go:107] acquiring lock: {Name:mk507200c1f46ea68c0c2896fa231924d660663f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:25.849893  265222 cache.go:107] acquiring lock: {Name:mk5bfca67d26458a19d81fb604def77746df1eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:25.850128  265222 cache.go:107] acquiring lock: {Name:mk5d1295ea377d97f7962ba416aea9d5b2908db5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:25.850267  265222 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:52:25.850431  265222 cache.go:107] acquiring lock: {Name:mkf308199b47415a211213857d6d1bca152d3eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:25.850482  265222 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 09:52:25.850494  265222 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 66.823µs
	I1206 09:52:25.850501  265222 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 09:52:25.850512  265222 cache.go:107] acquiring lock: {Name:mk2939303cfab712d7c12da37ef89ab2271b37f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:25.850578  265222 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:52:25.850763  265222 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:52:25.850873  265222 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1206 09:52:25.850889  265222 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 768.951µs
	I1206 09:52:25.850897  265222 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 09:52:25.851645  265222 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:52:25.853113  265222 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:52:25.854040  265222 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:52:25.854244  265222 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:52:25.854405  265222 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:52:25.854695  265222 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:52:25.889053  265222 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:52:25.889082  265222 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:52:25.889096  265222 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:52:25.889125  265222 start.go:360] acquireMachinesLock for no-preload-257359: {Name:mk6d92dd7ed626ac67dff0eb9c6415617a7c299c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:52:25.889234  265222 start.go:364] duration metric: took 94.392µs to acquireMachinesLock for "no-preload-257359"
	I1206 09:52:25.889258  265222 start.go:93] Provisioning new machine with config: &{Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:52:25.889325  265222 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:52:25.892634  265222 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:52:25.892864  265222 start.go:159] libmachine.API.Create for "no-preload-257359" (driver="docker")
	I1206 09:52:25.892891  265222 client.go:173] LocalClient.Create starting
	I1206 09:52:25.892949  265222 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 09:52:25.892986  265222 main.go:143] libmachine: Decoding PEM data...
	I1206 09:52:25.893002  265222 main.go:143] libmachine: Parsing certificate...
	I1206 09:52:25.893057  265222 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 09:52:25.893076  265222 main.go:143] libmachine: Decoding PEM data...
	I1206 09:52:25.893090  265222 main.go:143] libmachine: Parsing certificate...
	I1206 09:52:25.893599  265222 cli_runner.go:164] Run: docker network inspect no-preload-257359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:52:25.918404  265222 cli_runner.go:211] docker network inspect no-preload-257359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:52:25.918485  265222 network_create.go:284] running [docker network inspect no-preload-257359] to gather additional debugging logs...
	I1206 09:52:25.918502  265222 cli_runner.go:164] Run: docker network inspect no-preload-257359
	W1206 09:52:25.942124  265222 cli_runner.go:211] docker network inspect no-preload-257359 returned with exit code 1
	I1206 09:52:25.942162  265222 network_create.go:287] error running [docker network inspect no-preload-257359]: docker network inspect no-preload-257359: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-257359 not found
	I1206 09:52:25.942174  265222 network_create.go:289] output of [docker network inspect no-preload-257359]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-257359 not found
	
	** /stderr **
	I1206 09:52:25.942266  265222 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:52:25.973347  265222 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
	I1206 09:52:25.973797  265222 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6479799cc46a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:b3:f8:bd:10:a1} reservation:<nil>}
	I1206 09:52:25.974208  265222 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-045bb1cdddf9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:c6:f0:a4:f5:8d} reservation:<nil>}
	I1206 09:52:25.974674  265222 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c01a10}
	I1206 09:52:25.974700  265222 network_create.go:124] attempt to create docker network no-preload-257359 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1206 09:52:25.974763  265222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-257359 no-preload-257359
	I1206 09:52:26.074661  265222 network_create.go:108] docker network no-preload-257359 192.168.76.0/24 created
	I1206 09:52:26.074743  265222 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-257359" container
	I1206 09:52:26.074852  265222 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:52:26.098502  265222 cli_runner.go:164] Run: docker volume create no-preload-257359 --label name.minikube.sigs.k8s.io=no-preload-257359 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:52:26.143732  265222 oci.go:103] Successfully created a docker volume no-preload-257359
	I1206 09:52:26.143813  265222 cli_runner.go:164] Run: docker run --rm --name no-preload-257359-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-257359 --entrypoint /usr/bin/test -v no-preload-257359:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:52:26.161513  265222 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 09:52:26.178125  265222 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1206 09:52:26.199064  265222 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 09:52:26.219160  265222 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 09:52:26.225270  265222 cache.go:162] opening:  /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 09:52:26.578144  265222 cache.go:157] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 09:52:26.578211  265222 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 728.318742ms
	I1206 09:52:26.578240  265222 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 09:52:27.146514  265222 cache.go:157] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 09:52:27.146539  265222 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.296816626s
	I1206 09:52:27.146552  265222 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 09:52:27.168704  265222 cli_runner.go:217] Completed: docker run --rm --name no-preload-257359-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-257359 --entrypoint /usr/bin/test -v no-preload-257359:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (1.024841934s)
	I1206 09:52:27.168875  265222 oci.go:107] Successfully prepared a docker volume no-preload-257359
	I1206 09:52:27.168966  265222 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1206 09:52:27.176409  265222 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 09:52:27.176521  265222 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:52:27.215149  265222 cache.go:157] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 09:52:27.215225  265222 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.365702514s
	I1206 09:52:27.215254  265222 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 09:52:27.216448  265222 cache.go:157] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 09:52:27.216508  265222 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.367220601s
	I1206 09:52:27.216533  265222 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 09:52:27.226371  265222 cache.go:157] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 09:52:27.226443  265222 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.375929271s
	I1206 09:52:27.226470  265222 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 09:52:27.226515  265222 cache.go:87] Successfully saved all images to host disk.
	I1206 09:52:27.309933  265222 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-257359 --name no-preload-257359 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-257359 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-257359 --network no-preload-257359 --ip 192.168.76.2 --volume no-preload-257359:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:52:27.833308  265222 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Running}}
	I1206 09:52:27.869959  265222 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 09:52:27.899849  265222 cli_runner.go:164] Run: docker exec no-preload-257359 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:52:27.971009  265222 oci.go:144] the created container "no-preload-257359" has a running status.
	I1206 09:52:27.971042  265222 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa...
	I1206 09:52:28.404299  265222 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:52:28.432449  265222 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 09:52:28.462544  265222 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:52:28.462568  265222 kic_runner.go:114] Args: [docker exec --privileged no-preload-257359 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:52:28.530030  265222 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 09:52:28.560955  265222 machine.go:94] provisionDockerMachine start ...
	I1206 09:52:28.561054  265222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 09:52:28.588298  265222 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:28.588700  265222 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1206 09:52:28.588717  265222 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:52:28.589434  265222 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48896->127.0.0.1:33078: read: connection reset by peer
	I1206 09:52:31.743146  265222 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-257359
	
	I1206 09:52:31.743170  265222 ubuntu.go:182] provisioning hostname "no-preload-257359"
	I1206 09:52:31.743234  265222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 09:52:31.763132  265222 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:31.763501  265222 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1206 09:52:31.763515  265222 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-257359 && echo "no-preload-257359" | sudo tee /etc/hostname
	I1206 09:52:31.982458  265222 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-257359
	
	I1206 09:52:31.982557  265222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 09:52:32.029245  265222 main.go:143] libmachine: Using SSH client type: native
	I1206 09:52:32.029578  265222 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1206 09:52:32.029600  265222 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-257359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-257359/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-257359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:52:32.200455  265222 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:52:32.200483  265222 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 09:52:32.200514  265222 ubuntu.go:190] setting up certificates
	I1206 09:52:32.200524  265222 provision.go:84] configureAuth start
	I1206 09:52:32.200584  265222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 09:52:32.229469  265222 provision.go:143] copyHostCerts
	I1206 09:52:32.229538  265222 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 09:52:32.229552  265222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 09:52:32.229628  265222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 09:52:32.229727  265222 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 09:52:32.229738  265222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 09:52:32.229765  265222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 09:52:32.229824  265222 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 09:52:32.229833  265222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 09:52:32.229859  265222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 09:52:32.229930  265222 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.no-preload-257359 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-257359]
	I1206 09:52:32.442005  265222 provision.go:177] copyRemoteCerts
	I1206 09:52:32.442082  265222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:52:32.442129  265222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 09:52:32.473404  265222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 09:52:32.584222  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:52:32.617065  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 09:52:32.638779  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:52:32.661700  265222 provision.go:87] duration metric: took 461.15201ms to configureAuth
	I1206 09:52:32.661746  265222 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:52:32.661933  265222 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:52:32.661957  265222 machine.go:97] duration metric: took 4.100978327s to provisionDockerMachine
	I1206 09:52:32.661965  265222 client.go:176] duration metric: took 6.769068355s to LocalClient.Create
	I1206 09:52:32.661979  265222 start.go:167] duration metric: took 6.769116693s to libmachine.API.Create "no-preload-257359"
	I1206 09:52:32.661992  265222 start.go:293] postStartSetup for "no-preload-257359" (driver="docker")
	I1206 09:52:32.662003  265222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:52:32.662057  265222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:52:32.662100  265222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 09:52:32.688424  265222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 09:52:32.812102  265222 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:52:32.815748  265222 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:52:32.815817  265222 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:52:32.815843  265222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 09:52:32.815931  265222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 09:52:32.816064  265222 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 09:52:32.816226  265222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:52:32.827849  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:52:32.851155  265222 start.go:296] duration metric: took 189.149408ms for postStartSetup
	I1206 09:52:32.851595  265222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 09:52:32.883672  265222 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/config.json ...
	I1206 09:52:32.883925  265222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:52:32.883980  265222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 09:52:32.902991  265222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 09:52:33.007836  265222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:52:33.018047  265222 start.go:128] duration metric: took 7.128706267s to createHost
	I1206 09:52:33.018077  265222 start.go:83] releasing machines lock for "no-preload-257359", held for 7.128833383s
	I1206 09:52:33.018163  265222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 09:52:33.039783  265222 ssh_runner.go:195] Run: cat /version.json
	I1206 09:52:33.039836  265222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 09:52:33.040070  265222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:52:33.040130  265222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 09:52:33.079268  265222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 09:52:33.087027  265222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 09:52:33.199332  265222 ssh_runner.go:195] Run: systemctl --version
	I1206 09:52:33.305368  265222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:52:33.309802  265222 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:52:33.309923  265222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:52:33.349133  265222 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 09:52:33.349169  265222 start.go:496] detecting cgroup driver to use...
	I1206 09:52:33.349210  265222 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 09:52:33.349273  265222 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:52:33.366713  265222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:52:33.383722  265222 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:52:33.383803  265222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:52:33.402302  265222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:52:33.431764  265222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:52:33.619698  265222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:52:33.797979  265222 docker.go:234] disabling docker service ...
	I1206 09:52:33.798097  265222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:52:33.835639  265222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:52:33.859766  265222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:52:34.025211  265222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:52:34.211834  265222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:52:34.226594  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:52:34.242117  265222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:52:34.252166  265222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:52:34.262202  265222 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:52:34.262348  265222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:52:34.272455  265222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:52:34.282663  265222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:52:34.293148  265222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:52:34.302786  265222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:52:34.312340  265222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:52:34.321770  265222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:52:34.342262  265222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:52:34.358295  265222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:52:34.369970  265222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:52:34.381305  265222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:34.543556  265222 ssh_runner.go:195] Run: sudo systemctl restart containerd
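
The run of sed invocations above (sandbox image, SystemdCgroup, runtime type, CNI conf_dir, unprivileged ports) rewrites /etc/containerd/config.toml in place before the daemon-reload and restart. The cgroup-driver edit, for instance, is an anchored, capture-preserving substitution; a hedged Go equivalent of that one sed, run against a sample fragment:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Sample config.toml fragment; the real file is much larger.
        conf := `      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    `
        // Same shape as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
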
	I1206 09:52:34.670186  265222 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:52:34.670306  265222 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:52:34.674873  265222 start.go:564] Will wait 60s for crictl version
	I1206 09:52:34.675015  265222 ssh_runner.go:195] Run: which crictl
	I1206 09:52:34.682333  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:52:34.725663  265222 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 09:52:34.725784  265222 ssh_runner.go:195] Run: containerd --version
	I1206 09:52:34.747225  265222 ssh_runner.go:195] Run: containerd --version
	I1206 09:52:34.777267  265222 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 09:52:34.780675  265222 cli_runner.go:164] Run: docker network inspect no-preload-257359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:52:34.800117  265222 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 09:52:34.805349  265222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:52:34.822236  265222 kubeadm.go:884] updating cluster {Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1206 09:52:34.822361  265222 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:52:34.822417  265222 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:52:34.863167  265222 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 09:52:34.863190  265222 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 09:52:34.863228  265222 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:34.863481  265222 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:52:34.863580  265222 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:52:34.863675  265222 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:52:34.863767  265222 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:52:34.863849  265222 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1206 09:52:34.864003  265222 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:52:34.864157  265222 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:52:34.867151  265222 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:52:34.867461  265222 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:52:34.867536  265222 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:52:34.867619  265222 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:52:34.867755  265222 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1206 09:52:34.867806  265222 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:52:34.867886  265222 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:52:34.867755  265222 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
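
The image.go:181 "daemon lookup" errors above are expected on this runner: before falling back to its on-disk cache, minikube asks the host's Docker daemon whether it already holds each image, and here it holds none of them. Roughly the following, sketched by shelling out to the docker CLI (the real code goes through an image library rather than the CLI):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // inDaemon reports whether the local Docker daemon already has the image;
    // a zero exit status from `docker image inspect` means it does.
    func inDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        fmt.Println(inDaemon("registry.k8s.io/pause:3.10.1")) // false on this runner
    }
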
	I1206 09:52:35.082113  265222 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1206 09:52:35.082203  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:52:35.110085  265222 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1206 09:52:35.110169  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:52:35.132384  265222 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1206 09:52:35.132467  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:52:35.146233  265222 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1206 09:52:35.146474  265222 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:52:35.146570  265222 ssh_runner.go:195] Run: which crictl
	I1206 09:52:35.146392  265222 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1206 09:52:35.146688  265222 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:52:35.146734  265222 ssh_runner.go:195] Run: which crictl
	I1206 09:52:35.148264  265222 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1206 09:52:35.148348  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1206 09:52:35.172712  265222 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1206 09:52:35.172807  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:52:35.173002  265222 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1206 09:52:35.173040  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:52:35.178766  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:52:35.178848  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:52:35.178894  265222 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1206 09:52:35.178968  265222 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:52:35.179011  265222 ssh_runner.go:195] Run: which crictl
	I1206 09:52:35.199804  265222 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1206 09:52:35.199900  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1206 09:52:35.211463  265222 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1206 09:52:35.211564  265222 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1206 09:52:35.211643  265222 ssh_runner.go:195] Run: which crictl
	I1206 09:52:35.322439  265222 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1206 09:52:35.322621  265222 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:52:35.322700  265222 ssh_runner.go:195] Run: which crictl
	I1206 09:52:35.322549  265222 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1206 09:52:35.322811  265222 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:52:35.322848  265222 ssh_runner.go:195] Run: which crictl
	I1206 09:52:35.337168  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:52:35.337336  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:52:35.337425  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:52:35.337539  265222 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1206 09:52:35.337585  265222 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1206 09:52:35.337624  265222 ssh_runner.go:195] Run: which crictl
	I1206 09:52:35.337710  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:52:35.342210  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:52:35.342619  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:52:35.474011  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:52:35.474126  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:52:35.474279  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 09:52:35.474203  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 09:52:35.474342  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:52:35.514291  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:52:35.514447  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:52:35.668913  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:52:35.668977  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 09:52:35.669029  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 09:52:35.669167  265222 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 09:52:35.669215  265222 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 09:52:35.669299  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1206 09:52:35.669410  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 09:52:35.699690  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 09:52:35.699789  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 09:52:35.831511  265222 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1206 09:52:35.831561  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1206 09:52:35.831661  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 09:52:35.831733  265222 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1206 09:52:35.831807  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1206 09:52:35.831876  265222 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1206 09:52:35.831933  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 09:52:35.831986  265222 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1206 09:52:35.832012  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1206 09:52:35.900741  265222 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 09:52:35.900858  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 09:52:35.900916  265222 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 09:52:35.900960  265222 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1206 09:52:35.900978  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1206 09:52:35.901073  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 09:52:35.964622  265222 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1206 09:52:35.964676  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1206 09:52:35.964752  265222 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1206 09:52:35.964842  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	W1206 09:52:36.097505  265222 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1206 09:52:36.097636  265222 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1206 09:52:36.097717  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:36.104074  265222 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1206 09:52:36.104114  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1206 09:52:36.104158  265222 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1206 09:52:36.104175  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1206 09:52:36.111032  265222 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1206 09:52:36.111139  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1206 09:52:36.140765  265222 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1206 09:52:36.141397  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1206 09:52:36.253271  265222 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1206 09:52:36.253472  265222 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:36.253826  265222 ssh_runner.go:195] Run: which crictl
	I1206 09:52:36.579808  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:36.582421  265222 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1206 09:52:36.722539  265222 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 09:52:36.722618  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 09:52:36.767595  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:38.182183  265222 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.414542062s)
	I1206 09:52:38.182333  265222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:52:38.182437  265222 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.459800473s)
	I1206 09:52:38.182456  265222 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1206 09:52:38.182477  265222 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1206 09:52:38.182504  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1206 09:52:38.216032  265222 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 09:52:38.216138  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:52:39.286055  265222 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.103526829s)
	I1206 09:52:39.286082  265222 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1206 09:52:39.286100  265222 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 09:52:39.286151  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 09:52:39.286225  265222 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.070072272s)
	I1206 09:52:39.286243  265222 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1206 09:52:39.286268  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1206 09:52:40.353135  265222 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.066958115s)
	I1206 09:52:40.353207  265222 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1206 09:52:40.353247  265222 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 09:52:40.353317  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 09:52:41.425296  265222 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.071951522s)
	I1206 09:52:41.425323  265222 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1206 09:52:41.425343  265222 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:52:41.425388  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1206 09:52:42.926973  265222 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.501558881s)
	I1206 09:52:42.926996  265222 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1206 09:52:42.927015  265222 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 09:52:42.927071  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 09:52:44.061010  265222 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.133917049s)
	I1206 09:52:44.061033  265222 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1206 09:52:44.061052  265222 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:52:44.061121  265222 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1206 09:52:44.446666  265222 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 09:52:44.446697  265222 cache_images.go:125] Successfully loaded all cached images
	I1206 09:52:44.446702  265222 cache_images.go:94] duration metric: took 9.583498426s to LoadCachedImages
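
The 9.58s LoadCachedImages phase above repeats one per-image loop: stat the tarball under /var/lib/minikube/images, scp it from the local cache only when the stat fails, then import it into containerd's k8s.io namespace with ctr. The control flow, sketched with hypothetical runRemote/scp stand-ins for the ssh_runner calls:

    package main

    import (
        "errors"
        "fmt"
    )

    // loadCached mirrors the per-image flow in the log: existence check,
    // conditional transfer, then ctr import. runRemote and scp are
    // hypothetical stand-ins, not minikube's actual helpers.
    func loadCached(runRemote func(string) error, scp func(src, dst string) error, name string) error {
        remote := "/var/lib/minikube/images/" + name
        if runRemote(`stat -c "%s %y" `+remote) != nil { // existence check failed
            if err := scp("cache/images/arm64/"+name, remote); err != nil {
                return err
            }
        }
        return runRemote("sudo ctr -n=k8s.io images import " + remote)
    }

    func main() {
        // Stub wiring so the sketch runs standalone; runRemote always reports
        // "missing", so all three steps are printed in order.
        runRemote := func(cmd string) error { fmt.Println("ssh:", cmd); return errors.New("missing") }
        scp := func(src, dst string) error { fmt.Println("scp:", src, "->", dst); return nil }
        _ = loadCached(runRemote, scp, "pause_3.10.1")
    }
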
	I1206 09:52:44.446712  265222 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 09:52:44.446808  265222 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-257359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:52:44.446871  265222 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:52:44.480063  265222 cni.go:84] Creating CNI manager for ""
	I1206 09:52:44.480082  265222 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:52:44.480116  265222 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:52:44.480141  265222 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-257359 NodeName:no-preload-257359 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:52:44.480256  265222 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-257359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:52:44.480322  265222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:52:44.490750  265222 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1206 09:52:44.490815  265222 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:52:44.500772  265222 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1206 09:52:44.500874  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1206 09:52:44.501788  265222 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1206 09:52:44.502218  265222 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1206 09:52:44.506747  265222 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1206 09:52:44.506782  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1206 09:52:45.408037  265222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:52:45.431683  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1206 09:52:45.435938  265222 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1206 09:52:45.437710  265222 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1206 09:52:45.437895  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1206 09:52:45.451102  265222 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1206 09:52:45.451179  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
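
kubectl, kubelet and kubeadm are fetched from dl.k8s.io with a ?checksum=file:...sha256 query, i.e. each download is verified against its published detached SHA-256 file before being cached locally and scp'd into the node. A self-contained sketch of that verification step (URL taken from the log; this is not minikube's downloader, which handles the checksum query itself):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    // fetchVerified downloads url to dst while hashing the stream, then
    // compares the digest against the detached <url>.sha256 file.
    func fetchVerified(url, dst string) error {
        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        sumBytes, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        want := strings.Fields(string(sumBytes))[0] // first field is the hex digest

        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
        }
        return nil
    }

    func main() {
        if err := fetchVerified(
            "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm",
            "kubeadm"); err != nil {
            log.Fatal(err)
        }
    }
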
	I1206 09:52:46.122316  265222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:52:46.132126  265222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 09:52:46.148097  265222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:52:46.163610  265222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 09:52:46.179737  265222 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:52:46.184575  265222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:52:46.196464  265222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:52:46.324994  265222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:52:46.348545  265222 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359 for IP: 192.168.76.2
	I1206 09:52:46.348568  265222 certs.go:195] generating shared ca certs ...
	I1206 09:52:46.348585  265222 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:46.348730  265222 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 09:52:46.348787  265222 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 09:52:46.348800  265222 certs.go:257] generating profile certs ...
	I1206 09:52:46.348854  265222 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/client.key
	I1206 09:52:46.348871  265222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/client.crt with IP's: []
	I1206 09:52:46.626207  265222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/client.crt ...
	I1206 09:52:46.626238  265222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/client.crt: {Name:mk7ae5dc838aab962ab6ef747976385f2ff6fd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:46.626472  265222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/client.key ...
	I1206 09:52:46.626487  265222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/client.key: {Name:mkd087cfe2a244d53006e28e8e18839af88cf7d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:46.626592  265222 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key.673fc286
	I1206 09:52:46.626612  265222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.crt.673fc286 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1206 09:52:46.947774  265222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.crt.673fc286 ...
	I1206 09:52:46.947805  265222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.crt.673fc286: {Name:mkbdd47f0d51174482860587ed249c7e698ba834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:46.947999  265222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key.673fc286 ...
	I1206 09:52:46.948014  265222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key.673fc286: {Name:mkc728167d621800f353207e9360a0595d5c43ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:46.948095  265222 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.crt.673fc286 -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.crt
	I1206 09:52:46.948180  265222 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key.673fc286 -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key
	I1206 09:52:46.948243  265222 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key
	I1206 09:52:46.948267  265222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.crt with IP's: []
	I1206 09:52:47.048199  265222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.crt ...
	I1206 09:52:47.048230  265222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.crt: {Name:mk399d29eeec84b168e8fd65ebe5a3c5c58a0419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:52:47.048404  265222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key ...
	I1206 09:52:47.048422  265222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key: {Name:mk26bb41020563c731ec8f8330ff1245f2cb9b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
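
The certs.go/crypto.go lines above mint the profile's client, apiserver and aggregator keypairs; the apiserver cert is issued for exactly the SAN IPs listed (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). A compressed crypto/x509 sketch of that issuance, self-signed here for brevity where minikube instead signs with its minikubeCA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // SAN IPs from the log line above
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
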
	I1206 09:52:47.048610  265222 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 09:52:47.048664  265222 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 09:52:47.048679  265222 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:52:47.048706  265222 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 09:52:47.048736  265222 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:52:47.048800  265222 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 09:52:47.048851  265222 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:52:47.049410  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:52:47.068753  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:52:47.089184  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:52:47.109310  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:52:47.137704  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:52:47.156075  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:52:47.174948  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:52:47.193893  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:52:47.219234  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 09:52:47.245291  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 09:52:47.266891  265222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:52:47.288735  265222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:52:47.307256  265222 ssh_runner.go:195] Run: openssl version
	I1206 09:52:47.315967  265222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 09:52:47.326723  265222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 09:52:47.335179  265222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 09:52:47.341097  265222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 09:52:47.341165  265222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 09:52:47.386489  265222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:52:47.394771  265222 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:52:47.402906  265222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:47.413754  265222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:52:47.422726  265222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:47.427363  265222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:47.427483  265222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:52:47.470846  265222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:52:47.479199  265222 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:52:47.487549  265222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 09:52:47.496911  265222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 09:52:47.505388  265222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 09:52:47.510046  265222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 09:52:47.510277  265222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 09:52:47.554933  265222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:52:47.564297  265222 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
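
The openssl x509 -hash / ln -fs pairs above install each CA into OpenSSL's hashed lookup directory: verifiers find a CA through a <subject-hash>.0 symlink in /etc/ssl/certs (b5213941.0 for minikubeCA in this run). The same dance in Go, assuming an openssl binary on PATH:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // installCA computes the OpenSSL subject hash for a PEM cert and links
    // <hash>.0 in certsDir at it, mimicking the ln -fs in the log.
    func installCA(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // force-replace, as ln -fs does
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            log.Fatal(err)
        }
    }
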
	I1206 09:52:47.575582  265222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:52:47.580584  265222 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:52:47.580649  265222 kubeadm.go:401] StartCluster: {Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:52:47.580731  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:52:47.580787  265222 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:52:47.610364  265222 cri.go:89] found id: ""
	I1206 09:52:47.610447  265222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:52:47.619969  265222 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:52:47.628489  265222 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:52:47.628572  265222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:52:47.637191  265222 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:52:47.637214  265222 kubeadm.go:158] found existing configuration files:
	
	I1206 09:52:47.637262  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:52:47.646642  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:52:47.646730  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:52:47.654667  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:52:47.663946  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:52:47.664048  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:52:47.671868  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:52:47.680472  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:52:47.680562  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:52:47.689389  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:52:47.700232  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:52:47.700320  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:52:47.710298  265222 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:52:47.854844  265222 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:52:47.855415  265222 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:52:47.935357  265222 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:56:52.131955  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 09:56:52.131990  265222 kubeadm.go:319] 
	I1206 09:56:52.132057  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
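The failing health probe can be reproduced by hand to watch whether the kubelet ever starts serving. A minimal sketch, assuming the docker driver and the node container name no-preload-257359 taken from this log:

  # enter the kic node container (docker driver only)
  docker exec -it no-preload-257359 /bin/bash
  # poll the same endpoint kubeadm checks; "ok" means the kubelet is healthy
  curl -sSL http://127.0.0.1:10248/healthz
  # "connection refused" means the kubelet process never bound port 10248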
	I1206 09:56:52.135086  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:52.135149  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:52.135269  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:52.135335  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:52.135398  265222 kubeadm.go:319] OS: Linux
	I1206 09:56:52.135462  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:52.135528  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:52.135580  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:52.135635  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:52.135687  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:52.135753  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:52.135820  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:52.135888  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:52.135938  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:52.136021  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:52.136130  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:52.136253  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:52.136339  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:52.141711  265222 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:52.141840  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:52.141916  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:52.141987  265222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:52.142053  265222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:52.142117  265222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:52.142167  265222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:52.142231  265222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:52.142358  265222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142411  265222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:52.142534  265222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142602  265222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:52.142665  265222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:52.142714  265222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:52.142774  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:52.142827  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:52.142886  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:52.142942  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:52.143007  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:52.143067  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:52.143146  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:52.143212  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:52.146063  265222 out.go:252]   - Booting up control plane ...
	I1206 09:56:52.146187  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:52.146272  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:52.146343  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:52.146451  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:52.146548  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:52.146656  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:52.146744  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:52.146786  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:52.146923  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:52.147038  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:56:52.147107  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000059514s
	I1206 09:56:52.147115  265222 kubeadm.go:319] 
	I1206 09:56:52.147172  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:56:52.147209  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:56:52.147316  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:56:52.147324  265222 kubeadm.go:319] 
	I1206 09:56:52.147528  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:56:52.147567  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:56:52.147602  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
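Note that the two commands kubeadm suggests must run inside the node, not on the CI host; with the docker driver they can be wrapped in docker exec. A sketch, again assuming the node container name from this log:

  # kubelet unit status, from outside the node
  docker exec no-preload-257359 systemctl status kubelet --no-pager
  # last 50 kubelet journal lines, usually where the real failure reason is
  docker exec no-preload-257359 journalctl -u kubelet --no-pager -n 50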
	W1206 09:56:52.147720  265222 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000059514s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
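Of the three preflight warnings above, the cgroup v1 one is the actionable candidate here: the host kernel (5.15, cgroups v1) is paired with kubelet v1.35, which now wants cgroup v1 support opted into explicitly. A sketch of the KubeletConfiguration fragment the warning points at; the field name is taken from the kubelet config API and should be verified against the exact kubelet version, so treat it as an assumption:

  # fragment of /var/lib/kubelet/config.yaml (YAML, as the kubelet config requires)
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  failCgroupV1: false   # explicitly allow running on a cgroup v1 host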
	
	I1206 09:56:52.147812  265222 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:56:52.147999  265222 kubeadm.go:319] 
	I1206 09:56:52.558950  265222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:56:52.574085  265222 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:52.574157  265222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:52.583215  265222 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:52.583237  265222 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:52.583290  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:52.592240  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:52.592330  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:52.601081  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:52.609915  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:52.609987  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:52.618677  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.627409  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:52.627476  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.635636  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:52.644224  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:52.644339  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:56:52.652667  265222 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:52.772328  265222 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:56:52.772790  265222 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:56:52.844974  265222 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:00:53.947913  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:53.947946  265222 kubeadm.go:319] 
	I1206 10:00:53.948017  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:00:53.951147  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:53.951214  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:53.951335  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:53.951432  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:53.951494  265222 kubeadm.go:319] OS: Linux
	I1206 10:00:53.951590  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:53.951656  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:53.951713  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:53.951772  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:53.951824  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:53.951883  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:53.951934  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:53.952003  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:53.952063  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:53.952142  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:53.952242  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:53.952340  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:53.952409  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:53.955466  265222 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:53.955575  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:53.955645  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:53.955732  265222 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:53.955795  265222 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:53.955896  265222 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:53.955964  265222 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:53.956029  265222 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:53.956091  265222 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:53.956171  265222 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:53.956285  265222 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:53.956334  265222 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:53.956402  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:53.956467  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:53.956544  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:53.956617  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:53.956688  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:53.956748  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:53.956848  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:53.956936  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:53.961787  265222 out.go:252]   - Booting up control plane ...
	I1206 10:00:53.961906  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:53.961995  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:53.962068  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:53.962176  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:53.962277  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:53.962386  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:53.962474  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:53.962516  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:53.962650  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:53.962758  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:53.962827  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001252323s
	I1206 10:00:53.962835  265222 kubeadm.go:319] 
	I1206 10:00:53.962892  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:53.962934  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:53.963049  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:53.963060  265222 kubeadm.go:319] 
	I1206 10:00:53.963164  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:53.963200  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:53.963233  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:53.963298  265222 kubeadm.go:403] duration metric: took 8m6.382652277s to StartCluster
	I1206 10:00:53.963352  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:00:53.963521  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:00:53.963616  265222 kubeadm.go:319] 
	I1206 10:00:53.989223  265222 cri.go:89] found id: ""
	I1206 10:00:53.989249  265222 logs.go:282] 0 containers: []
	W1206 10:00:53.989258  265222 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:00:53.989265  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:00:53.989329  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:00:54.028962  265222 cri.go:89] found id: ""
	I1206 10:00:54.029000  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.029010  265222 logs.go:284] No container was found matching "etcd"
	I1206 10:00:54.029026  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:00:54.029137  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:00:54.055727  265222 cri.go:89] found id: ""
	I1206 10:00:54.055751  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.055760  265222 logs.go:284] No container was found matching "coredns"
	I1206 10:00:54.055766  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:00:54.055826  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:00:54.086040  265222 cri.go:89] found id: ""
	I1206 10:00:54.086066  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.086080  265222 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:00:54.086088  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:00:54.086232  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:00:54.112094  265222 cri.go:89] found id: ""
	I1206 10:00:54.112119  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.112127  265222 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:00:54.112134  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:00:54.112192  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:00:54.141768  265222 cri.go:89] found id: ""
	I1206 10:00:54.141793  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.141802  265222 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:00:54.141808  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:00:54.141867  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:00:54.168313  265222 cri.go:89] found id: ""
	I1206 10:00:54.168338  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.168347  265222 logs.go:284] No container was found matching "kindnet"
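All seven lookups return empty, which points at the kubelet never creating any control-plane containers rather than at containers crashing after start. A direct cross-check on the node (sketch, same assumed container name):

  # everything the CRI knows about, across all namespaces and states
  docker exec no-preload-257359 sudo crictl ps -a
  # what containerd itself holds in its k8s.io namespace
  docker exec no-preload-257359 sudo ctr -n k8s.io containers ls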
	I1206 10:00:54.168357  265222 logs.go:123] Gathering logs for kubelet ...
	I1206 10:00:54.168368  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:00:54.224543  265222 logs.go:123] Gathering logs for dmesg ...
	I1206 10:00:54.224578  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:00:54.238829  265222 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:00:54.238859  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:00:54.301151  265222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:00:54.301174  265222 logs.go:123] Gathering logs for containerd ...
	I1206 10:00:54.301185  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:00:54.345045  265222 logs.go:123] Gathering logs for container status ...
	I1206 10:00:54.345077  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:00:54.376879  265222 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:54.376928  265222 out.go:285] * 
	W1206 10:00:54.376993  265222 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.377007  265222 out.go:285] * 
	W1206 10:00:54.379146  265222 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:00:54.386374  265222 out.go:203] 
	W1206 10:00:54.389309  265222 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.389364  265222 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:00:54.389414  265222 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:00:54.392565  265222 out.go:203] 

** /stderr **
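
Triage note: every part of the failure above reduces to one symptom: the kubelet never answered its health probe at http://127.0.0.1:10248/healthz inside kubeadm's 4m0s window. The commands below are a minimal triage sketch assembled only from checks and flags this log itself names; the profile name no-preload-257359 is taken from this run, and the node-side commands assume access via minikube ssh.

	# node side: out/minikube-linux-arm64 ssh -p no-preload-257359
	curl -sSL http://127.0.0.1:10248/healthz   # the probe kubeadm was polling
	systemctl status kubelet                   # is the unit running at all?
	journalctl -xeu kubelet                    # why it exited, if it did

	# host side, per the suggestion block printed above
	out/minikube-linux-arm64 logs --file=logs.txt -p no-preload-257359
	out/minikube-linux-arm64 start -p no-preload-257359 --extra-config=kubelet.cgroup-driver=systemd

The cgroups v1 deprecation warning in stderr is also relevant: the fact that it fired means the node was detected on cgroups v1, and per that warning kubelet v1.35+ only keeps running on v1 hosts when the kubelet configuration option FailCgroupV1 is explicitly set to false.
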
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-257359
helpers_test.go:243: (dbg) docker inspect no-preload-257359:

-- stdout --
	[
	    {
	        "Id": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	        "Created": "2025-12-06T09:52:27.333376101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:27.474519381Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hostname",
	        "HostsPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hosts",
	        "LogPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26-json.log",
	        "Name": "/no-preload-257359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-257359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-257359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	                "LowerDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-257359",
	                "Source": "/var/lib/docker/volumes/no-preload-257359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-257359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-257359",
	                "name.minikube.sigs.k8s.io": "no-preload-257359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9be8b5c820dd4c3fe37c75e77303bf5032a3f74d4c68aab4997b8f54cdf3a70",
	            "SandboxKey": "/var/run/docker/netns/b9be8b5c820d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-257359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:a5:2f:79:60:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b05bfbfa55363c82b2c20e75689dc6d905b9177d9ed6efb1bc4c663e65903cf4",
	                    "EndpointID": "37f42c3d2ab503584211eef52439f3c17e372039f5b35f15d09e7f8a0c022b40",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-257359",
	                        "76494ba86a40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
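
The full inspect dump above is useful for the record, but single fields can be pulled with the --format flag and a Go template, the same mechanism this log itself uses later (e.g. --format={{.State.Running}}). A few illustrative one-liners against this container; the expected values in the comments are read directly from the dump above:

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' no-preload-257359
	# running pid=265730
	docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-257359").IPAddress}}' no-preload-257359
	# 192.168.76.2
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-257359
	# 33081 (the localhost port forwarded to the apiserver port 8443)
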
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359: exit status 6 (350.343381ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:00:54.890161  284690 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
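
The exit status 6 here follows directly from the failed start: the no-preload-257359 endpoint was never written to /home/jenkins/minikube-integration/22049-2448/kubeconfig, so status cannot resolve it. If the cluster is later brought up, the fix is the one the warning itself prints; a sketch, with the profile name from this run:

	out/minikube-linux-arm64 update-context -p no-preload-257359
	kubectl config current-context   # confirm which context kubectl now targets
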
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-257359 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-100767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p embed-certs-100767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ image   │ old-k8s-version-587884 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ pause   │ -p old-k8s-version-587884 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ unpause │ -p old-k8s-version-587884 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-507319                                                                                                                                                                                                                            │ disable-driver-mounts-507319 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ embed-certs-100767 image list --format=json                                                                                                                                                                                                                │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:56:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:56:12.381215  278643 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:56:12.381413  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381441  278643 out.go:374] Setting ErrFile to fd 2...
	I1206 09:56:12.381461  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381758  278643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:56:12.382257  278643 out.go:368] Setting JSON to false
	I1206 09:56:12.383240  278643 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5924,"bootTime":1765009049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:56:12.383355  278643 start.go:143] virtualization:  
	I1206 09:56:12.387258  278643 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:56:12.391484  278643 notify.go:221] Checking for updates...
	I1206 09:56:12.391496  278643 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:56:12.394851  278643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:56:12.398015  278643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:56:12.400990  278643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:56:12.403944  278643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:56:12.407028  278643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:56:12.410729  278643 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:12.410840  278643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:56:12.445065  278643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:56:12.445213  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.519754  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.507997479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.519868  278643 docker.go:319] overlay module found
	I1206 09:56:12.523177  278643 out.go:179] * Using the docker driver based on user configuration
	I1206 09:56:12.526466  278643 start.go:309] selected driver: docker
	I1206 09:56:12.526501  278643 start.go:927] validating driver "docker" against <nil>
	I1206 09:56:12.526518  278643 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:56:12.527486  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.593335  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.584358845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.593500  278643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 09:56:12.593524  278643 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 09:56:12.593752  278643 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:56:12.596647  278643 out.go:179] * Using Docker driver with root privileges
	I1206 09:56:12.599543  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:12.599621  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:12.599637  278643 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:56:12.599733  278643 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:12.602953  278643 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 09:56:12.605789  278643 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 09:56:12.608936  278643 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:56:12.611867  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:12.611918  278643 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 09:56:12.611946  278643 cache.go:65] Caching tarball of preloaded images
	I1206 09:56:12.611951  278643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:56:12.612037  278643 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 09:56:12.612047  278643 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 09:56:12.612154  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:12.612171  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json: {Name:mk449f962f0653f31dbbb03aed6f74703a91443a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:12.631940  278643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:56:12.631967  278643 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:56:12.631982  278643 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:56:12.632013  278643 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:56:12.632117  278643 start.go:364] duration metric: took 83.89µs to acquireMachinesLock for "newest-cni-387337"
	I1206 09:56:12.632148  278643 start.go:93] Provisioning new machine with config: &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:56:12.632223  278643 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:56:12.635711  278643 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:56:12.635957  278643 start.go:159] libmachine.API.Create for "newest-cni-387337" (driver="docker")
	I1206 09:56:12.635999  278643 client.go:173] LocalClient.Create starting
	I1206 09:56:12.636069  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 09:56:12.636109  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636134  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636197  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 09:56:12.636218  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636234  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636615  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:56:12.654202  278643 cli_runner.go:211] docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:56:12.654286  278643 network_create.go:284] running [docker network inspect newest-cni-387337] to gather additional debugging logs...
	I1206 09:56:12.654307  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337
	W1206 09:56:12.674169  278643 cli_runner.go:211] docker network inspect newest-cni-387337 returned with exit code 1
	I1206 09:56:12.674197  278643 network_create.go:287] error running [docker network inspect newest-cni-387337]: docker network inspect newest-cni-387337: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-387337 not found
	I1206 09:56:12.674213  278643 network_create.go:289] output of [docker network inspect newest-cni-387337]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-387337 not found
	
	** /stderr **
	I1206 09:56:12.674320  278643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:12.697162  278643 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
	I1206 09:56:12.697876  278643 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6479799cc46a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:b3:f8:bd:10:a1} reservation:<nil>}
	I1206 09:56:12.698630  278643 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-045bb1cdddf9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:c6:f0:a4:f5:8d} reservation:<nil>}
	I1206 09:56:12.699284  278643 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b05bfbfa5536 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:01:4f:ea:ac:91} reservation:<nil>}
	I1206 09:56:12.700138  278643 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5b80}
	I1206 09:56:12.700211  278643 network_create.go:124] attempt to create docker network newest-cni-387337 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 09:56:12.700393  278643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-387337 newest-cni-387337
	I1206 09:56:12.761289  278643 network_create.go:108] docker network newest-cni-387337 192.168.85.0/24 created
	I1206 09:56:12.761339  278643 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-387337" container
	I1206 09:56:12.761412  278643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:56:12.778118  278643 cli_runner.go:164] Run: docker volume create newest-cni-387337 --label name.minikube.sigs.k8s.io=newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:56:12.796678  278643 oci.go:103] Successfully created a docker volume newest-cni-387337
	I1206 09:56:12.796763  278643 cli_runner.go:164] Run: docker run --rm --name newest-cni-387337-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --entrypoint /usr/bin/test -v newest-cni-387337:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:56:13.320572  278643 oci.go:107] Successfully prepared a docker volume newest-cni-387337
	I1206 09:56:13.320655  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:13.320668  278643 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:56:13.320746  278643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:56:17.285875  278643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.965091453s)
	I1206 09:56:17.285931  278643 kic.go:203] duration metric: took 3.965259503s to extract preloaded images to volume ...
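The preload step mounts the .tar.lz4 read-only into a throwaway kicbase container and untars it into the named volume, so the container images land in /var before the node container ever starts. The same docker invocation driven from Go via os/exec (a sketch; the helper name is mine and the image digest is omitted):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload replays the "docker run --rm --entrypoint /usr/bin/tar" call
// from the log: tarball mounted read-only at /preloaded.tar, volume mounted at
// /extractDir, lz4 decompression selected with -I.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values abridged from the log above (image digest omitted).
	err := extractPreload(
		"/home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
		"newest-cni-387337",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032")
	if err != nil {
		panic(err)
	}
}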
	W1206 09:56:17.286072  278643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 09:56:17.286184  278643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:56:17.343671  278643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-387337 --name newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-387337 --network newest-cni-387337 --ip 192.168.85.2 --volume newest-cni-387337:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:56:17.667864  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Running}}
	I1206 09:56:17.689633  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.721713  278643 cli_runner.go:164] Run: docker exec newest-cni-387337 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:56:17.785391  278643 oci.go:144] the created container "newest-cni-387337" has a running status.
	I1206 09:56:17.785426  278643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa...
	I1206 09:56:17.929044  278643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:56:17.954229  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.978708  278643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:56:17.978729  278643 kic_runner.go:114] Args: [docker exec --privileged newest-cni-387337 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:56:18.030854  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:18.061034  278643 machine.go:94] provisionDockerMachine start ...
	I1206 09:56:18.061129  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:18.105041  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:18.105395  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:18.105412  278643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:56:18.106117  278643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39934->127.0.0.1:33093: read: connection reset by peer
	I1206 09:56:21.259644  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
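The first dial at 09:56:18 is reset because sshd inside the just-started container is not up yet; the same command succeeds about three seconds later. Provisioning effectively retries the handshake in a loop, roughly like this sketch using golang.org/x/crypto/ssh (user, port, and key path are taken from the log; the retry count and interval are illustrative):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps redialing until sshd inside the freshly started
// container accepts the handshake, as the provisioner above effectively does.
func dialWithRetry(addr, keyPath string, attempts int) (*ssh.Client, error) {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
		Timeout:         5 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh not ready after %d attempts: %v", attempts, lastErr)
}

func main() {
	c, err := dialWithRetry("127.0.0.1:33093",
		"/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa", 10)
	if err != nil {
		panic(err)
	}
	defer c.Close()
	fmt.Println("connected")
}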
	I1206 09:56:21.259668  278643 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 09:56:21.259730  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.280819  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.281151  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.281167  278643 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 09:56:21.446750  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 09:56:21.446840  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.466708  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.467034  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.467060  278643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:56:21.636152  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:56:21.636184  278643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 09:56:21.636206  278643 ubuntu.go:190] setting up certificates
	I1206 09:56:21.636216  278643 provision.go:84] configureAuth start
	I1206 09:56:21.636276  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:21.657085  278643 provision.go:143] copyHostCerts
	I1206 09:56:21.657167  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 09:56:21.657182  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 09:56:21.657287  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 09:56:21.657399  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 09:56:21.657409  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 09:56:21.657439  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 09:56:21.657519  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 09:56:21.657530  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 09:56:21.657556  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 09:56:21.657626  278643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
	I1206 09:56:22.235324  278643 provision.go:177] copyRemoteCerts
	I1206 09:56:22.235498  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:56:22.235563  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.254382  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.371978  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:56:22.391750  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:56:22.409835  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 09:56:22.427840  278643 provision.go:87] duration metric: took 791.601956ms to configureAuth
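configureAuth issues a server certificate whose SANs cover every name the machine answers to: san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]. A runnable sketch of that with crypto/x509, generating a throwaway CA in-process so nothing has to be loaded from disk (error handling elided for brevity; this is not minikube's code):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the minikubeCA loaded from .minikube/certs.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the IP and DNS SANs listed in the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-387337"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-387337"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}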
	I1206 09:56:22.427871  278643 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:56:22.428075  278643 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:22.428086  278643 machine.go:97] duration metric: took 4.367032221s to provisionDockerMachine
	I1206 09:56:22.428093  278643 client.go:176] duration metric: took 9.792082753s to LocalClient.Create
	I1206 09:56:22.428116  278643 start.go:167] duration metric: took 9.792160612s to libmachine.API.Create "newest-cni-387337"
	I1206 09:56:22.428128  278643 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 09:56:22.428139  278643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:56:22.428194  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:56:22.428238  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.445246  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.552047  278643 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:56:22.555602  278643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:56:22.555631  278643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:56:22.555643  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 09:56:22.555699  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 09:56:22.555780  278643 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 09:56:22.555887  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:56:22.563581  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:56:22.582270  278643 start.go:296] duration metric: took 154.127995ms for postStartSetup
	I1206 09:56:22.582688  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.600191  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:22.600480  278643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:56:22.600532  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.618476  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.721461  278643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:56:22.732754  278643 start.go:128] duration metric: took 10.100506966s to createHost
	I1206 09:56:22.732791  278643 start.go:83] releasing machines lock for "newest-cni-387337", held for 10.100657655s
	I1206 09:56:22.732898  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.752253  278643 ssh_runner.go:195] Run: cat /version.json
	I1206 09:56:22.752314  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.752332  278643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:56:22.752395  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.774900  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.786887  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.981230  278643 ssh_runner.go:195] Run: systemctl --version
	I1206 09:56:22.988594  278643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:56:22.993872  278643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:56:22.993970  278643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:56:23.036477  278643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 09:56:23.036554  278643 start.go:496] detecting cgroup driver to use...
	I1206 09:56:23.036604  278643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 09:56:23.036691  278643 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:56:23.053535  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:56:23.068263  278643 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:56:23.068359  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:56:23.086894  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:56:23.106796  278643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:56:23.229113  278643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:56:23.353681  278643 docker.go:234] disabling docker service ...
	I1206 09:56:23.353777  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:56:23.376315  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:56:23.389550  278643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:56:23.511242  278643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:56:23.632737  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:56:23.646096  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:56:23.661684  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:56:23.671182  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:56:23.680434  278643 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:56:23.680559  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:56:23.689627  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.698546  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:56:23.707890  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.719929  278643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:56:23.733633  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:56:23.743339  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:56:23.753107  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:56:23.763042  278643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:56:23.772383  278643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:56:23.783215  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:23.897379  278643 ssh_runner.go:195] Run: sudo systemctl restart containerd
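The run of sed -i calls above edits /etc/containerd/config.toml in place: pin sandbox_image to pause:3.10.1, force SystemdCgroup = false to match the host's cgroupfs driver, and normalize the runc runtime to io.containerd.runc.v2, then daemon-reload and restart containerd. The same substitutions expressed in Go with multiline regexps (a sketch, not minikube's implementation):

package main

import (
	"os"
	"regexp"
)

// patchContainerdConfig applies the key substitutions the sed calls above do,
// using (?m) so ^ and $ match per line, as sed's addressing does.
func patchContainerdConfig(path string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(b)
	s = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
		ReplaceAllString(s, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
	s = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
		ReplaceAllString(s, `${1}SystemdCgroup = false`)
	s = regexp.MustCompile(`"io\.containerd\.runtime\.v1\.linux"`).
		ReplaceAllString(s, `"io.containerd.runc.v2"`)
	return os.WriteFile(path, []byte(s), 0o644)
}

func main() {
	// After this, the log runs systemctl daemon-reload and restart containerd.
	if err := patchContainerdConfig("/etc/containerd/config.toml"); err != nil {
		panic(err)
	}
}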
	I1206 09:56:24.034106  278643 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:56:24.034227  278643 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:56:24.038555  278643 start.go:564] Will wait 60s for crictl version
	I1206 09:56:24.038667  278643 ssh_runner.go:195] Run: which crictl
	I1206 09:56:24.042893  278643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:56:24.073212  278643 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 09:56:24.073340  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.100352  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.125479  278643 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 09:56:24.128585  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:24.145134  278643 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:56:24.149083  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:56:24.161762  278643 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 09:56:24.164661  278643 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:56:24.164804  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:24.164892  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.190128  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.190154  278643 containerd.go:534] Images already preloaded, skipping extraction
	I1206 09:56:24.190214  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.214192  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.214220  278643 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:56:24.214229  278643 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 09:56:24.214329  278643 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:56:24.214400  278643 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:56:24.241654  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:24.241679  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:24.241702  278643 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 09:56:24.241726  278643 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:56:24.241847  278643 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
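minikube renders this kubeadm config from a template before scp'ing it to /var/tmp/minikube/kubeadm.yaml.new. A cut-down illustration of that rendering step with text/template, covering only the networking stanza (the template and field names here are mine, not minikube's):

package main

import (
	"os"
	"text/template"
)

// A miniature stand-in for the much larger template minikube renders the
// kubeadm config from; values match the networking stanza above.
const netTmpl = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("net").Parse(netTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"DNSDomain":     "cluster.local",
		"PodSubnet":     "10.42.0.0/16", // from kubeadm.pod-network-cidr
		"ServiceSubnet": "10.96.0.0/12",
	})
}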
	I1206 09:56:24.241920  278643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:56:24.250168  278643 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:56:24.250236  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:56:24.259935  278643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 09:56:24.273892  278643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:56:24.288011  278643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 09:56:24.300649  278643 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:56:24.304319  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:56:24.314437  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:24.421252  278643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:56:24.437400  278643 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 09:56:24.437465  278643 certs.go:195] generating shared ca certs ...
	I1206 09:56:24.437496  278643 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.437676  278643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 09:56:24.437744  278643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 09:56:24.437767  278643 certs.go:257] generating profile certs ...
	I1206 09:56:24.437853  278643 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 09:56:24.437892  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt with IP's: []
	I1206 09:56:24.906874  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt ...
	I1206 09:56:24.906907  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt: {Name:mk3786951ca6b934a39ce0b897be0476ac498386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907112  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key ...
	I1206 09:56:24.907126  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key: {Name:mk400b28e78f0247222772118d8e6e5e81e847c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907230  278643 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 09:56:24.907249  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 09:56:25.112458  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd ...
	I1206 09:56:25.112494  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd: {Name:mk0b66241f430a839566e8733856f4f7778dd203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.112675  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd ...
	I1206 09:56:25.113433  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd: {Name:mk545f1d084e139bf8c177372caec577367d5287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.113573  278643 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt
	I1206 09:56:25.113667  278643 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key
	I1206 09:56:25.113729  278643 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 09:56:25.113755  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt with IP's: []
	I1206 09:56:25.390925  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt ...
	I1206 09:56:25.390958  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt: {Name:mkf533c4c7795dfadd5e4919382846ec6f68f803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391162  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key ...
	I1206 09:56:25.391180  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key: {Name:mk080dba3e2186a2cc27fdce20eb9b0d79705a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391368  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 09:56:25.391429  278643 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 09:56:25.391438  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:56:25.391466  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 09:56:25.391497  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:56:25.391527  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 09:56:25.391576  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:56:25.392167  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:56:25.411617  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:56:25.431093  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:56:25.449697  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:56:25.468550  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:56:25.487105  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:56:25.505768  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:56:25.525108  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:56:25.543500  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:56:25.562465  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 09:56:25.580776  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 09:56:25.598408  278643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:56:25.612387  278643 ssh_runner.go:195] Run: openssl version
	I1206 09:56:25.618822  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.626357  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:56:25.633933  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637838  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637908  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.679350  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:56:25.686883  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:56:25.694288  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.701929  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 09:56:25.709757  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.713960  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.714081  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.755271  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:56:25.762807  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
	I1206 09:56:25.770247  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.777748  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 09:56:25.785439  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789191  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789278  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.830268  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:56:25.837948  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
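The test -L / ln -fs pairs above build OpenSSL's hashed CA directory: each trusted PEM gets a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0) so verifiers can look certificates up by hash. A sketch that shells out to openssl for the hash and creates the link (the helper name is mine; the log links via the /etc/ssl/certs copy rather than the source path):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a CA cert and symlinks it
// into /etc/ssl/certs as <hash>.0, the layout the verifier expects.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}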
	I1206 09:56:25.845509  278643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:56:25.849323  278643 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:56:25.849426  278643 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:25.849528  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:56:25.849591  278643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:56:25.875432  278643 cri.go:89] found id: ""
	I1206 09:56:25.875532  278643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:56:25.883715  278643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:56:25.891695  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:25.891813  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:25.899921  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:25.899941  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:25.900032  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:25.908195  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:25.908312  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:25.916060  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:25.924068  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:25.924164  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:25.931858  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.939818  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:25.939921  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.948338  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:25.956152  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:25.956247  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:56:25.963660  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:26.031399  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:26.031465  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:26.131684  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:26.131760  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:26.131802  278643 kubeadm.go:319] OS: Linux
	I1206 09:56:26.131854  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:26.131907  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:26.131958  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:26.132009  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:26.132062  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:26.132114  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:26.132163  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:26.132215  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:26.132262  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:26.203361  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:26.203507  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:26.203605  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:26.210048  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:26.216410  278643 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:26.216591  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:26.216714  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:26.398131  278643 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:26.614015  278643 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:27.159843  278643 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:27.364968  278643 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:27.669555  278643 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:27.669750  278643 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.021664  278643 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:28.022031  278643 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.806854  278643 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:29.101949  278643 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:29.804533  278643 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:29.804903  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:30.341296  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:30.816858  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:30.960618  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:31.211332  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:31.505498  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:31.506301  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:31.509226  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:31.513473  278643 out.go:252]   - Booting up control plane ...
	I1206 09:56:31.513588  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:31.513674  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:31.513746  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:31.531878  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:31.532005  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:31.540494  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:31.540946  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:31.541222  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:31.688292  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:31.688412  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:56:52.131955  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 09:56:52.131990  265222 kubeadm.go:319] 
	I1206 09:56:52.132057  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
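Note the process id flips from 278643 to 265222 here: the newest-cni-387337 log is interleaved with no-preload-257359's, and both runs die at the same phase. kubeadm's [kubelet-check] never gets a 200 from http://127.0.0.1:10248/healthz, so wait-control-plane exits with connection refused. The probe it performs is essentially this loop (a sketch; the client timeout and poll interval are illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint the way the
// [kubelet-check] phase above does, failing once the deadline passes.
func waitForKubelet(url string, deadline time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy at %s after %s", url, deadline)
}

func main() {
	// "This can take up to 4m0s" in the log corresponds to the deadline here.
	if err := waitForKubelet("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		panic(err) // the state both failing clusters end up in
	}
}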
	I1206 09:56:52.135086  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:52.135149  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:52.135269  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:52.135335  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:52.135398  265222 kubeadm.go:319] OS: Linux
	I1206 09:56:52.135462  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:52.135528  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:52.135580  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:52.135635  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:52.135687  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:52.135753  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:52.135820  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:52.135888  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:52.135938  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:52.136021  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:52.136130  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:52.136253  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:52.136339  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:52.141711  265222 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:52.141840  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:52.141916  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:52.141987  265222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:52.142053  265222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:52.142117  265222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:52.142167  265222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:52.142231  265222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:52.142358  265222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142411  265222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:52.142534  265222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142602  265222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:52.142665  265222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:52.142714  265222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:52.142774  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:52.142827  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:52.142886  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:52.142942  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:52.143007  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:52.143067  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:52.143146  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:52.143212  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:52.146063  265222 out.go:252]   - Booting up control plane ...
	I1206 09:56:52.146187  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:52.146272  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:52.146343  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:52.146451  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:52.146548  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:52.146656  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:52.146744  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:52.146786  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:52.146923  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:52.147038  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:56:52.147107  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000059514s
	I1206 09:56:52.147115  265222 kubeadm.go:319] 
	I1206 09:56:52.147172  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:56:52.147209  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:56:52.147316  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:56:52.147324  265222 kubeadm.go:319] 
	I1206 09:56:52.147528  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:56:52.147567  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:56:52.147602  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
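	For context on the [kubelet-check] failure above: kubeadm polls the kubelet's local healthz endpoint at http://127.0.0.1:10248/healthz until it answers 200 OK or the 4m0s budget expires; "connection refused" means the kubelet process never came up to listen at all. A minimal stand-alone probe of the same endpoint (a hypothetical helper sketched in Go, not kubeadm source) looks like:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// probeKubelet polls the kubelet healthz endpoint the way the
	// wait-control-plane phase does, giving up once the deadline passes.
	func probeKubelet(deadline time.Duration) error {
		client := &http.Client{Timeout: 2 * time.Second}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet is up and healthy
				}
			}
			time.Sleep(time.Second) // not listening yet; retry
		}
		return fmt.Errorf("kubelet not healthy after %s", deadline)
	}

	func main() {
		if err := probeKubelet(4 * time.Minute); err != nil {
			fmt.Println(err) // the "connection refused" case seen in this log
		}
	}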
	W1206 09:56:52.147720  265222 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000059514s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 09:56:52.147812  265222 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:56:52.147999  265222 kubeadm.go:319] 
	I1206 09:56:52.558950  265222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:56:52.574085  265222 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:52.574157  265222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:52.583215  265222 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:52.583237  265222 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:52.583290  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:52.592240  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:52.592330  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:52.601081  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:52.609915  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:52.609987  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:52.618677  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.627409  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:52.627476  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.635636  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:52.644224  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:52.644339  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
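	Between attempts, minikube sweeps /etc/kubernetes for stale kubeconfigs, as the grep/rm pairs above show: any file that does not mention the expected control-plane endpoint is deleted so the retried kubeadm init that follows can regenerate it. A sketch of that sweep (an assumed simplification of the logic behind the kubeadm.go:164 messages, not a verbatim excerpt; the real runs wrap each command in sudo over SSH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, c := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + c
			// grep exits non-zero when the endpoint is absent or the file is
			// missing (status 2 here, since kubeadm reset removed the files).
			if err := exec.Command("grep", endpoint, path).Run(); err != nil {
				fmt.Printf("%s stale or missing; removing\n", path)
				os.Remove(path) // ignore the error, mirroring "rm -f"
			}
		}
	}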
	I1206 09:56:52.652667  265222 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:52.772328  265222 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:56:52.772790  265222 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:56:52.844974  265222 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:00:31.687178  278643 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000277363s
	I1206 10:00:31.687206  278643 kubeadm.go:319] 
	I1206 10:00:31.687552  278643 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:31.687635  278643 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:31.687823  278643 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:31.687982  278643 kubeadm.go:319] 
	I1206 10:00:31.688177  278643 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:31.688245  278643 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:31.688306  278643 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:31.688315  278643 kubeadm.go:319] 
	I1206 10:00:31.693600  278643 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:00:31.694063  278643 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:00:31.694183  278643 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:00:31.694443  278643 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:31.694449  278643 kubeadm.go:319] 
	I1206 10:00:31.694518  278643 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:31.694644  278643 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277363s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 10:00:31.694791  278643 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:00:32.113107  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:00:32.127511  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:00:32.127586  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:00:32.136075  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:00:32.136153  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 10:00:32.136236  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:00:32.144649  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:00:32.144725  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:00:32.152703  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:00:32.160876  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:00:32.160972  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:00:32.168760  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.176761  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:00:32.176847  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.184483  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:00:32.192491  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:00:32.192587  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:00:32.200531  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:00:32.244871  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:32.244955  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:32.319347  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:32.319474  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:32.319533  278643 kubeadm.go:319] OS: Linux
	I1206 10:00:32.319600  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:32.319668  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:32.319735  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:32.319804  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:32.319871  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:32.319938  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:32.320001  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:32.320072  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:32.320138  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:32.391588  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:32.391743  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:32.391866  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:32.399952  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:32.402951  278643 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:32.403119  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:32.403230  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:32.403363  278643 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:32.403527  278643 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:32.403636  278643 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:32.403722  278643 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:32.403826  278643 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:32.403924  278643 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:32.404038  278643 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:32.404152  278643 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:32.404224  278643 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:32.404311  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:32.618555  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:32.763900  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:33.042172  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:33.120040  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:33.316584  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:33.317337  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:33.321890  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:33.325150  278643 out.go:252]   - Booting up control plane ...
	I1206 10:00:33.325263  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:33.325348  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:33.327141  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:33.349232  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:33.349348  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:33.357825  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:33.358422  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:33.358496  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:33.491546  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:33.491691  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:53.947913  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:53.947946  265222 kubeadm.go:319] 
	I1206 10:00:53.948017  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:00:53.951147  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:53.951214  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:53.951335  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:53.951432  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:53.951494  265222 kubeadm.go:319] OS: Linux
	I1206 10:00:53.951590  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:53.951656  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:53.951713  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:53.951772  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:53.951824  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:53.951883  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:53.951934  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:53.952003  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:53.952063  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:53.952142  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:53.952242  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:53.952340  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:53.952409  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:53.955466  265222 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:53.955575  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:53.955645  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:53.955732  265222 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:53.955795  265222 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:53.955896  265222 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:53.955964  265222 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:53.956029  265222 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:53.956091  265222 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:53.956171  265222 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:53.956285  265222 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:53.956334  265222 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:53.956402  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:53.956467  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:53.956544  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:53.956617  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:53.956688  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:53.956748  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:53.956848  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:53.956936  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:53.961787  265222 out.go:252]   - Booting up control plane ...
	I1206 10:00:53.961906  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:53.961995  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:53.962068  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:53.962176  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:53.962277  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:53.962386  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:53.962474  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:53.962516  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:53.962650  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:53.962758  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:53.962827  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001252323s
	I1206 10:00:53.962835  265222 kubeadm.go:319] 
	I1206 10:00:53.962892  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:53.962934  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:53.963049  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:53.963060  265222 kubeadm.go:319] 
	I1206 10:00:53.963164  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:53.963200  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:53.963233  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:53.963298  265222 kubeadm.go:403] duration metric: took 8m6.382652277s to StartCluster
	I1206 10:00:53.963352  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:00:53.963521  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:00:53.963616  265222 kubeadm.go:319] 
	I1206 10:00:53.989223  265222 cri.go:89] found id: ""
	I1206 10:00:53.989249  265222 logs.go:282] 0 containers: []
	W1206 10:00:53.989258  265222 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:00:53.989265  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:00:53.989329  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:00:54.028962  265222 cri.go:89] found id: ""
	I1206 10:00:54.029000  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.029010  265222 logs.go:284] No container was found matching "etcd"
	I1206 10:00:54.029026  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:00:54.029137  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:00:54.055727  265222 cri.go:89] found id: ""
	I1206 10:00:54.055751  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.055760  265222 logs.go:284] No container was found matching "coredns"
	I1206 10:00:54.055766  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:00:54.055826  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:00:54.086040  265222 cri.go:89] found id: ""
	I1206 10:00:54.086066  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.086080  265222 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:00:54.086088  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:00:54.086232  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:00:54.112094  265222 cri.go:89] found id: ""
	I1206 10:00:54.112119  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.112127  265222 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:00:54.112134  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:00:54.112192  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:00:54.141768  265222 cri.go:89] found id: ""
	I1206 10:00:54.141793  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.141802  265222 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:00:54.141808  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:00:54.141867  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:00:54.168313  265222 cri.go:89] found id: ""
	I1206 10:00:54.168338  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.168347  265222 logs.go:284] No container was found matching "kindnet"
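	The sweep above is minikube's post-mortem: for every control-plane component it asks the CRI runtime whether a container was ever created, and the empty results (found id: "") confirm the kubelet never launched any static pod. The same check can be reproduced with crictl directly (a minimal Go sketch under that assumption, not minikube source):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, name := range components {
			// Same command the log shows: list all containers matching the
			// name, printing only their IDs.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if ids := strings.Fields(string(out)); len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			} else {
				fmt.Printf("%s: %d container(s)\n", name, len(ids))
			}
		}
	}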
	I1206 10:00:54.168357  265222 logs.go:123] Gathering logs for kubelet ...
	I1206 10:00:54.168368  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:00:54.224543  265222 logs.go:123] Gathering logs for dmesg ...
	I1206 10:00:54.224578  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:00:54.238829  265222 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:00:54.238859  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:00:54.301151  265222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:00:54.301174  265222 logs.go:123] Gathering logs for containerd ...
	I1206 10:00:54.301185  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:00:54.345045  265222 logs.go:123] Gathering logs for container status ...
	I1206 10:00:54.345077  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
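	With no containers to inspect, the remaining evidence comes from the host state gathered above: the kubelet and containerd journals, dmesg, and a doomed "describe nodes" against the apiserver that never started. The journal tailing reduces to the following (a sketch assuming systemd and passwordless sudo, as on these CI hosts):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, unit := range []string{"kubelet", "containerd"} {
			// Tail the last 400 lines of each unit, as the gathering pass does.
			out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
			if err != nil {
				fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
				continue
			}
			fmt.Printf("=== %s (last 400 lines) ===\n%s", unit, out)
		}
	}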
	W1206 10:00:54.376879  265222 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:54.376928  265222 out.go:285] * 
	W1206 10:00:54.376993  265222 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.377007  265222 out.go:285] * 
	W1206 10:00:54.379146  265222 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:00:54.386374  265222 out.go:203] 
	W1206 10:00:54.389309  265222 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.389364  265222 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:00:54.389414  265222 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:00:54.392565  265222 out.go:203] 
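A minimal sketch of the escape hatch the SystemVerification warnings above name: a kubelet at v1.35 or newer only tolerates a cgroup v1 host when its configuration sets FailCgroupV1 to false (spelled failCgroupV1 in the v1beta1 schema). The file path here is hypothetical, and how minikube would feed this through --extra-config or a patch is an assumption, not taken from this log:

	# Hypothetical fragment, per the FailCgroupV1 warning in the kubeadm output above.
	cat <<-'EOF' >/tmp/kubelet-cgroupv1.yaml
		apiVersion: kubelet.config.k8s.io/v1beta1
		kind: KubeletConfiguration
		failCgroupV1: false
	EOF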
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 09:52:38 no-preload-257359 containerd[759]: time="2025-12-06T09:52:38.191823536Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.275178603Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.277620284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.285785629Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.304007334Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.343348725Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.345594679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.355341259Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.356240418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.412021767Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.415665946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.424382780Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.425219514Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.916947622Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.919648694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.927607211Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.928479671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.049537462Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.052567103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.061652310Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.067188454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.436051059Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.438287839Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.445166461Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.446411675Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:00:55.597032    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:55.597646    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:55.599450    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:55.600194    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:55.601846    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
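Each memcache.go error above is the same symptom: nothing is listening on the apiserver port. A sketch of a direct probe that fails the same way while kube-apiserver is down, assuming curl is available on the node:

	# Expect "connection refused" until a control plane actually comes up.
	curl -k --max-time 5 https://localhost:8443/healthz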
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:00:55 up  1:43,  0 user,  load average: 0.34, 1.47, 2.14
	Linux no-preload-257359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:00:52 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:52 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 06 10:00:52 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:52 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:53 no-preload-257359 kubelet[5343]: E1206 10:00:53.044817    5343 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:53 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:53 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:53 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 06 10:00:53 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:53 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:53 no-preload-257359 kubelet[5349]: E1206 10:00:53.781498    5349 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:53 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:53 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:54 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 10:00:54 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:54 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:54 no-preload-257359 kubelet[5434]: E1206 10:00:54.608161    5434 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:54 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:54 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:55 no-preload-257359 kubelet[5462]: E1206 10:00:55.306984    5462 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:55 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:55 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
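The kubelet restart loop in the journal above is the kubelet's own configuration validation rejecting a cgroup v1 host, not a crash in the control plane. A sketch for confirming which cgroup hierarchy the host exposes, assuming GNU coreutils stat:

	# "cgroup2fs" means a unified cgroup v2 hierarchy; "tmpfs" means the
	# legacy v1 tree that kubelet v1.35+ refuses by default.
	stat -fc %T /sys/fs/cgroup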
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359
E1206 10:00:55.754663    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359: exit status 6 (383.143475ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 10:00:56.113622  284911 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-257359" apiserver is not running, skipping kubectl commands (state="Stopped")
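The stale-context warning in the status output above names its own remediation; a sketch of running it against this profile, assuming the same binary and profile name:

	out/minikube-linux-arm64 update-context -p no-preload-257359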
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (510.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (503.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 09:56:12.433160    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:56:14.995516    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:56:20.117159    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:56:30.358544    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:56:36.062195    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:56:50.841066    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:31.803533    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:57.331046    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:58:53.725600    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:39.146248    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:42.815620    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:42.822586    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:42.834142    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:42.855754    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:42.897253    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:42.978806    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:43.140425    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:43.462170    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:44.104500    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:45.386204    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:47.947570    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:53.069915    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:00:03.311550    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:00:23.793586    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:00:38.822549    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.627159251s)

                                                
                                                
-- stdout --
	* [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:56:12.381215  278643 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:56:12.381413  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381441  278643 out.go:374] Setting ErrFile to fd 2...
	I1206 09:56:12.381461  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381758  278643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:56:12.382257  278643 out.go:368] Setting JSON to false
	I1206 09:56:12.383240  278643 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5924,"bootTime":1765009049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:56:12.383355  278643 start.go:143] virtualization:  
	I1206 09:56:12.387258  278643 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:56:12.391484  278643 notify.go:221] Checking for updates...
	I1206 09:56:12.391496  278643 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:56:12.394851  278643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:56:12.398015  278643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:56:12.400990  278643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:56:12.403944  278643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:56:12.407028  278643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:56:12.410729  278643 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:12.410840  278643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:56:12.445065  278643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:56:12.445213  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.519754  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.507997479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.519868  278643 docker.go:319] overlay module found
	I1206 09:56:12.523177  278643 out.go:179] * Using the docker driver based on user configuration
	I1206 09:56:12.526466  278643 start.go:309] selected driver: docker
	I1206 09:56:12.526501  278643 start.go:927] validating driver "docker" against <nil>
	I1206 09:56:12.526518  278643 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:56:12.527486  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.593335  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.584358845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.593500  278643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 09:56:12.593524  278643 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 09:56:12.593752  278643 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:56:12.596647  278643 out.go:179] * Using Docker driver with root privileges
	I1206 09:56:12.599543  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:12.599621  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:12.599637  278643 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:56:12.599733  278643 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:12.602953  278643 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 09:56:12.605789  278643 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 09:56:12.608936  278643 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:56:12.611867  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:12.611918  278643 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 09:56:12.611946  278643 cache.go:65] Caching tarball of preloaded images
	I1206 09:56:12.611951  278643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:56:12.612037  278643 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 09:56:12.612047  278643 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 09:56:12.612154  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:12.612171  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json: {Name:mk449f962f0653f31dbbb03aed6f74703a91443a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:12.631940  278643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:56:12.631967  278643 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:56:12.631982  278643 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:56:12.632013  278643 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:56:12.632117  278643 start.go:364] duration metric: took 83.89µs to acquireMachinesLock for "newest-cni-387337"
	I1206 09:56:12.632148  278643 start.go:93] Provisioning new machine with config: &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:56:12.632223  278643 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:56:12.635711  278643 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:56:12.635957  278643 start.go:159] libmachine.API.Create for "newest-cni-387337" (driver="docker")
	I1206 09:56:12.635999  278643 client.go:173] LocalClient.Create starting
	I1206 09:56:12.636069  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 09:56:12.636109  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636134  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636197  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 09:56:12.636218  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636234  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636615  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:56:12.654202  278643 cli_runner.go:211] docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:56:12.654286  278643 network_create.go:284] running [docker network inspect newest-cni-387337] to gather additional debugging logs...
	I1206 09:56:12.654307  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337
	W1206 09:56:12.674169  278643 cli_runner.go:211] docker network inspect newest-cni-387337 returned with exit code 1
	I1206 09:56:12.674197  278643 network_create.go:287] error running [docker network inspect newest-cni-387337]: docker network inspect newest-cni-387337: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-387337 not found
	I1206 09:56:12.674213  278643 network_create.go:289] output of [docker network inspect newest-cni-387337]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-387337 not found
	
	** /stderr **
	I1206 09:56:12.674320  278643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:12.697162  278643 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
	I1206 09:56:12.697876  278643 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6479799cc46a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:b3:f8:bd:10:a1} reservation:<nil>}
	I1206 09:56:12.698630  278643 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-045bb1cdddf9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:c6:f0:a4:f5:8d} reservation:<nil>}
	I1206 09:56:12.699284  278643 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b05bfbfa5536 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:01:4f:ea:ac:91} reservation:<nil>}
	I1206 09:56:12.700138  278643 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5b80}
	I1206 09:56:12.700211  278643 network_create.go:124] attempt to create docker network newest-cni-387337 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 09:56:12.700393  278643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-387337 newest-cni-387337
	I1206 09:56:12.761289  278643 network_create.go:108] docker network newest-cni-387337 192.168.85.0/24 created
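The "skipping subnet ... that is taken" lines above show minikube walking the 192.168.x.0/24 candidates until one is free. A sketch that reproduces the same inventory by hand, assuming the docker CLI's Go-template inspect output:

	# Print each docker network with the subnets it has claimed.
	docker network ls -q | while read -r n; do
	  docker network inspect "$n" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	done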
	I1206 09:56:12.761339  278643 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-387337" container
	I1206 09:56:12.761412  278643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:56:12.778118  278643 cli_runner.go:164] Run: docker volume create newest-cni-387337 --label name.minikube.sigs.k8s.io=newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:56:12.796678  278643 oci.go:103] Successfully created a docker volume newest-cni-387337
	I1206 09:56:12.796763  278643 cli_runner.go:164] Run: docker run --rm --name newest-cni-387337-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --entrypoint /usr/bin/test -v newest-cni-387337:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:56:13.320572  278643 oci.go:107] Successfully prepared a docker volume newest-cni-387337
	I1206 09:56:13.320655  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:13.320668  278643 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:56:13.320746  278643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:56:17.285875  278643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.965091453s)
	I1206 09:56:17.285931  278643 kic.go:203] duration metric: took 3.965259503s to extract preloaded images to volume ...
	W1206 09:56:17.286072  278643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 09:56:17.286184  278643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:56:17.343671  278643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-387337 --name newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-387337 --network newest-cni-387337 --ip 192.168.85.2 --volume newest-cni-387337:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:56:17.667864  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Running}}
	I1206 09:56:17.689633  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.721713  278643 cli_runner.go:164] Run: docker exec newest-cni-387337 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:56:17.785391  278643 oci.go:144] the created container "newest-cni-387337" has a running status.
	I1206 09:56:17.785426  278643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa...
	I1206 09:56:17.929044  278643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:56:17.954229  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.978708  278643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:56:17.978729  278643 kic_runner.go:114] Args: [docker exec --privileged newest-cni-387337 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:56:18.030854  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:18.061034  278643 machine.go:94] provisionDockerMachine start ...
	I1206 09:56:18.061129  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:18.105041  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:18.105395  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:18.105412  278643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:56:18.106117  278643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39934->127.0.0.1:33093: read: connection reset by peer
	I1206 09:56:21.259644  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 09:56:21.259668  278643 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 09:56:21.259730  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.280819  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.281151  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.281167  278643 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 09:56:21.446750  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 09:56:21.446840  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.466708  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.467034  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.467060  278643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:56:21.636152  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:56:21.636184  278643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 09:56:21.636206  278643 ubuntu.go:190] setting up certificates
	I1206 09:56:21.636216  278643 provision.go:84] configureAuth start
	I1206 09:56:21.636276  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:21.657085  278643 provision.go:143] copyHostCerts
	I1206 09:56:21.657167  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 09:56:21.657182  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 09:56:21.657287  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 09:56:21.657399  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 09:56:21.657409  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 09:56:21.657439  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 09:56:21.657519  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 09:56:21.657530  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 09:56:21.657556  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 09:56:21.657626  278643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
	I1206 09:56:22.235324  278643 provision.go:177] copyRemoteCerts
	I1206 09:56:22.235498  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:56:22.235563  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.254382  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.371978  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:56:22.391750  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:56:22.409835  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
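	For reference: the server certificate just copied to /etc/docker/server.pem can be spot-checked from inside the machine. A minimal sketch, assuming OpenSSL 1.1.1+ is available there; this is not a command the test itself runs:
	
	# print the subject and the SANs minikube signed (127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337)
	openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem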
	I1206 09:56:22.427840  278643 provision.go:87] duration metric: took 791.601956ms to configureAuth
	I1206 09:56:22.427871  278643 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:56:22.428075  278643 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:22.428086  278643 machine.go:97] duration metric: took 4.367032221s to provisionDockerMachine
	I1206 09:56:22.428093  278643 client.go:176] duration metric: took 9.792082753s to LocalClient.Create
	I1206 09:56:22.428116  278643 start.go:167] duration metric: took 9.792160612s to libmachine.API.Create "newest-cni-387337"
	I1206 09:56:22.428128  278643 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 09:56:22.428139  278643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:56:22.428194  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:56:22.428238  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.445246  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.552047  278643 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:56:22.555602  278643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:56:22.555631  278643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:56:22.555643  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 09:56:22.555699  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 09:56:22.555780  278643 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 09:56:22.555887  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:56:22.563581  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:56:22.582270  278643 start.go:296] duration metric: took 154.127995ms for postStartSetup
	I1206 09:56:22.582688  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.600191  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:22.600480  278643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:56:22.600532  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.618476  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.721461  278643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:56:22.732754  278643 start.go:128] duration metric: took 10.100506966s to createHost
	I1206 09:56:22.732791  278643 start.go:83] releasing machines lock for "newest-cni-387337", held for 10.100657655s
	I1206 09:56:22.732898  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.752253  278643 ssh_runner.go:195] Run: cat /version.json
	I1206 09:56:22.752314  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.752332  278643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:56:22.752395  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.774900  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.786887  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.981230  278643 ssh_runner.go:195] Run: systemctl --version
	I1206 09:56:22.988594  278643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:56:22.993872  278643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:56:22.993970  278643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:56:23.036477  278643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 09:56:23.036554  278643 start.go:496] detecting cgroup driver to use...
	I1206 09:56:23.036604  278643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 09:56:23.036691  278643 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:56:23.053535  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:56:23.068263  278643 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:56:23.068359  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:56:23.086894  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:56:23.106796  278643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:56:23.229113  278643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:56:23.353681  278643 docker.go:234] disabling docker service ...
	I1206 09:56:23.353777  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:56:23.376315  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:56:23.389550  278643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:56:23.511242  278643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:56:23.632737  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:56:23.646096  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:56:23.661684  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:56:23.671182  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:56:23.680434  278643 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:56:23.680559  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:56:23.689627  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.698546  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:56:23.707890  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.719929  278643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:56:23.733633  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:56:23.743339  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:56:23.753107  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
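	For reference: the sed edits above pin the pause image, disable OOM-score-adj restriction, force SystemdCgroup = false to match the detected "cgroupfs" host driver, point conf_dir at /etc/cni/net.d, and enable unprivileged ports. A quick way to confirm they all took, a sketch assuming the stock /etc/containerd/config.toml layout being patched here:
	
	# each key should appear exactly once with the value set by the sed commands above
	sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml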
	I1206 09:56:23.763042  278643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:56:23.772383  278643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:56:23.783215  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:23.897379  278643 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:56:24.034106  278643 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:56:24.034227  278643 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:56:24.038555  278643 start.go:564] Will wait 60s for crictl version
	I1206 09:56:24.038667  278643 ssh_runner.go:195] Run: which crictl
	I1206 09:56:24.042893  278643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:56:24.073212  278643 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 09:56:24.073340  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.100352  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.125479  278643 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 09:56:24.128585  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:24.145134  278643 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:56:24.149083  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:56:24.161762  278643 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 09:56:24.164661  278643 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:56:24.164804  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:24.164892  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.190128  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.190154  278643 containerd.go:534] Images already preloaded, skipping extraction
	I1206 09:56:24.190214  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.214192  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.214220  278643 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:56:24.214229  278643 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 09:56:24.214329  278643 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:56:24.214400  278643 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:56:24.241654  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:24.241679  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:24.241702  278643 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 09:56:24.241726  278643 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:56:24.241847  278643 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
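	For reference: once this generated config has been copied to /var/tmp/minikube/kubeadm.yaml (which happens a few lines below), it can be checked standalone before init. A sketch assuming kubeadm v1.26+, where the validate subcommand exists; this is not a step the test itself performs:
	
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml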
	
	I1206 09:56:24.241920  278643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:56:24.250168  278643 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:56:24.250236  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:56:24.259935  278643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 09:56:24.273892  278643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:56:24.288011  278643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 09:56:24.300649  278643 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:56:24.304319  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:56:24.314437  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:24.421252  278643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:56:24.437400  278643 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 09:56:24.437465  278643 certs.go:195] generating shared ca certs ...
	I1206 09:56:24.437496  278643 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.437676  278643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 09:56:24.437744  278643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 09:56:24.437767  278643 certs.go:257] generating profile certs ...
	I1206 09:56:24.437853  278643 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 09:56:24.437892  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt with IP's: []
	I1206 09:56:24.906874  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt ...
	I1206 09:56:24.906907  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt: {Name:mk3786951ca6b934a39ce0b897be0476ac498386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907112  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key ...
	I1206 09:56:24.907126  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key: {Name:mk400b28e78f0247222772118d8e6e5e81e847c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907230  278643 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 09:56:24.907249  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 09:56:25.112458  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd ...
	I1206 09:56:25.112494  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd: {Name:mk0b66241f430a839566e8733856f4f7778dd203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.112675  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd ...
	I1206 09:56:25.113433  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd: {Name:mk545f1d084e139bf8c177372caec577367d5287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.113573  278643 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt
	I1206 09:56:25.113667  278643 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key
	I1206 09:56:25.113729  278643 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 09:56:25.113755  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt with IP's: []
	I1206 09:56:25.390925  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt ...
	I1206 09:56:25.390958  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt: {Name:mkf533c4c7795dfadd5e4919382846ec6f68f803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391162  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key ...
	I1206 09:56:25.391180  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key: {Name:mk080dba3e2186a2cc27fdce20eb9b0d79705a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391368  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 09:56:25.391429  278643 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 09:56:25.391438  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:56:25.391466  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 09:56:25.391497  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:56:25.391527  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 09:56:25.391576  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:56:25.392167  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:56:25.411617  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:56:25.431093  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:56:25.449697  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:56:25.468550  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:56:25.487105  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:56:25.505768  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:56:25.525108  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:56:25.543500  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:56:25.562465  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 09:56:25.580776  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 09:56:25.598408  278643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:56:25.612387  278643 ssh_runner.go:195] Run: openssl version
	I1206 09:56:25.618822  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.626357  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:56:25.633933  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637838  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637908  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.679350  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:56:25.686883  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:56:25.694288  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.701929  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 09:56:25.709757  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.713960  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.714081  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.755271  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:56:25.762807  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
	I1206 09:56:25.770247  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.777748  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 09:56:25.785439  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789191  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789278  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.830268  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:56:25.837948  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:56:25.845509  278643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:56:25.849323  278643 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:56:25.849426  278643 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:25.849528  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:56:25.849591  278643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:56:25.875432  278643 cri.go:89] found id: ""
	I1206 09:56:25.875532  278643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:56:25.883715  278643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:56:25.891695  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:25.891813  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:25.899921  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:25.899941  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:25.900032  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:25.908195  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:25.908312  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:25.916060  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:25.924068  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:25.924164  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:25.931858  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.939818  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:25.939921  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.948338  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:25.956152  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:25.956247  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:56:25.963660  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:26.031399  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:26.031465  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:26.131684  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:26.131760  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:26.131802  278643 kubeadm.go:319] OS: Linux
	I1206 09:56:26.131854  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:26.131907  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:26.131958  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:26.132009  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:26.132062  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:26.132114  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:26.132163  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:26.132215  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:26.132262  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:26.203361  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:26.203507  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:26.203605  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:26.210048  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:26.216410  278643 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:26.216591  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:26.216714  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:26.398131  278643 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:26.614015  278643 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:27.159843  278643 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:27.364968  278643 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:27.669555  278643 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:27.669750  278643 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.021664  278643 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:28.022031  278643 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.806854  278643 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:29.101949  278643 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:29.804533  278643 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:29.804903  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:30.341296  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:30.816858  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:30.960618  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:31.211332  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:31.505498  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:31.506301  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:31.509226  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:31.513473  278643 out.go:252]   - Booting up control plane ...
	I1206 09:56:31.513588  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:31.513674  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:31.513746  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:31.531878  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:31.532005  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:31.540494  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:31.540946  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:31.541222  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:31.688292  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:31.688412  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:31.687178  278643 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000277363s
	I1206 10:00:31.687206  278643 kubeadm.go:319] 
	I1206 10:00:31.687552  278643 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:31.687635  278643 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:31.687823  278643 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:31.687982  278643 kubeadm.go:319] 
	I1206 10:00:31.688177  278643 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:31.688245  278643 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:31.688306  278643 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:31.688315  278643 kubeadm.go:319] 
	I1206 10:00:31.693600  278643 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:00:31.694063  278643 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:00:31.694183  278643 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:00:31.694443  278643 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:31.694449  278643 kubeadm.go:319] 
	I1206 10:00:31.694518  278643 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:31.694644  278643 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277363s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
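	For reference: the two SystemVerification warnings are the strongest lead here; this 5.15 AWS kernel runs cgroups v1, which kubelet v1.35+ rejects unless explicitly allowed. A minimal triage sketch using only the commands and the option named in the output above (treat the lowercase failCgroupV1 spelling of the KubeletConfiguration field as an assumption):
	
	# inspect why the kubelet never became healthy on 127.0.0.1:10248
	systemctl status kubelet
	journalctl -xeu kubelet
	# per the warning, cgroups v1 must be opted into for kubelet >= v1.35,
	# e.g. in the KubeletConfiguration document of the kubeadm config:
	#   failCgroupV1: false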
	
	I1206 10:00:31.694791  278643 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:00:32.113107  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:00:32.127511  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:00:32.127586  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:00:32.136075  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:00:32.136153  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 10:00:32.136236  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:00:32.144649  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:00:32.144725  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:00:32.152703  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:00:32.160876  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:00:32.160972  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:00:32.168760  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.176761  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:00:32.176847  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.184483  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:00:32.192491  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:00:32.192587  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:00:32.200531  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:00:32.244871  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:32.244955  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:32.319347  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:32.319474  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:32.319533  278643 kubeadm.go:319] OS: Linux
	I1206 10:00:32.319600  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:32.319668  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:32.319735  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:32.319804  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:32.319871  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:32.319938  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:32.320001  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:32.320072  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:32.320138  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:32.391588  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:32.391743  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:32.391866  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:32.399952  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:32.402951  278643 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:32.403119  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:32.403230  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:32.403363  278643 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:32.403527  278643 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:32.403636  278643 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:32.403722  278643 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:32.403826  278643 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:32.403924  278643 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:32.404038  278643 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:32.404152  278643 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:32.404224  278643 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:32.404311  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:32.618555  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:32.763900  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:33.042172  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:33.120040  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:33.316584  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:33.317337  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:33.321890  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:33.325150  278643 out.go:252]   - Booting up control plane ...
	I1206 10:00:33.325263  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:33.325348  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:33.327141  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:33.349232  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:33.349348  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:33.357825  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:33.358422  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:33.358496  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:33.491546  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:33.491691  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:04:33.492075  278643 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000455959s
	I1206 10:04:33.497324  278643 kubeadm.go:319] 
	I1206 10:04:33.497409  278643 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:04:33.497452  278643 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:04:33.497564  278643 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:04:33.497573  278643 kubeadm.go:319] 
	I1206 10:04:33.497680  278643 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:04:33.497715  278643 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:04:33.497750  278643 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:04:33.497758  278643 kubeadm.go:319] 
	I1206 10:04:33.509281  278643 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:04:33.509716  278643 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:04:33.509836  278643 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:04:33.510075  278643 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:04:33.510082  278643 kubeadm.go:319] 
	I1206 10:04:33.510156  278643 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:04:33.510222  278643 kubeadm.go:403] duration metric: took 8m7.660801722s to StartCluster
	I1206 10:04:33.510279  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:04:33.510354  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:04:33.557741  278643 cri.go:89] found id: ""
	I1206 10:04:33.557779  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.557788  278643 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:04:33.557796  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:04:33.557870  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:04:33.590687  278643 cri.go:89] found id: ""
	I1206 10:04:33.590716  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.590766  278643 logs.go:284] No container was found matching "etcd"
	I1206 10:04:33.590773  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:04:33.590860  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:04:33.619661  278643 cri.go:89] found id: ""
	I1206 10:04:33.619702  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.619713  278643 logs.go:284] No container was found matching "coredns"
	I1206 10:04:33.619720  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:04:33.619795  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:04:33.645015  278643 cri.go:89] found id: ""
	I1206 10:04:33.645040  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.645050  278643 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:04:33.645056  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:04:33.645136  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:04:33.670104  278643 cri.go:89] found id: ""
	I1206 10:04:33.670173  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.670200  278643 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:04:33.670221  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:04:33.670299  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:04:33.695765  278643 cri.go:89] found id: ""
	I1206 10:04:33.695789  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.695798  278643 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:04:33.695805  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:04:33.695865  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:04:33.722778  278643 cri.go:89] found id: ""
	I1206 10:04:33.722855  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.722877  278643 logs.go:284] No container was found matching "kindnet"
	I1206 10:04:33.722899  278643 logs.go:123] Gathering logs for kubelet ...
	I1206 10:04:33.722939  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:04:33.781701  278643 logs.go:123] Gathering logs for dmesg ...
	I1206 10:04:33.781737  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:04:33.795784  278643 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:04:33.795812  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:04:33.861564  278643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:04:33.852548    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.853295    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.854985    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.855733    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.857242    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:04:33.852548    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.853295    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.854985    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.855733    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.857242    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:04:33.861601  278643 logs.go:123] Gathering logs for containerd ...
	I1206 10:04:33.861614  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:04:33.901364  278643 logs.go:123] Gathering logs for container status ...
	I1206 10:04:33.901402  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:04:33.932222  278643 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000455959s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:04:33.932285  278643 out.go:285] * 
	W1206 10:04:33.932340  278643 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000455959s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:04:33.932357  278643 out.go:285] * 
	W1206 10:04:33.934488  278643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:04:33.941229  278643 out.go:203] 
	W1206 10:04:33.944295  278643 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000455959s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:04:33.944336  278643 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:04:33.944358  278643 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:04:33.949524  278643 out.go:203] 

                                                
                                                
** /stderr **
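The SystemVerification warning repeated throughout the log above is the actionable hint: kubelet v1.35+ refuses to run on a cgroups v1 host unless the configuration explicitly opts back in. A minimal hand-applied sketch follows, assuming the KubeletConfiguration field the warning calls 'FailCgroupV1' is spelled failCgroupV1 in YAML, and using the config path kubeadm reported above; minikube normally manages this file itself, so this is illustrative rather than the harness's fix:

    # Identify the cgroup hierarchy mounted on the node:
    # "cgroup2fs" means cgroups v2, "tmpfs" means the legacy v1 hierarchy.
    stat -fc %T /sys/fs/cgroup/
    # On a v1 host, opt the kubelet back in (field name assumed from the warning text):
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet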
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
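The exit path above already names the remedies to try first. A hedged sketch of those two steps, reusing the profile and flags from the failing command: with the docker driver the "node" is the container /newest-cni-387337 running systemd (see the docker inspect output below), so journalctl is reachable via docker exec, and the --extra-config value is copied verbatim from the log's Suggestion line:

    # Pull the kubelet journal out of the node container for inspection:
    docker exec newest-cni-387337 journalctl -xeu kubelet --no-pager | tail -n 50
    # Retry the start with the suggested cgroup driver override:
    out/minikube-linux-arm64 start -p newest-cni-387337 --memory=3072 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd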
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-387337
helpers_test.go:243: (dbg) docker inspect newest-cni-387337:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	        "Created": "2025-12-06T09:56:17.358293629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:56:17.425249124Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hosts",
	        "LogPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9-json.log",
	        "Name": "/newest-cni-387337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-387337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-387337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	                "LowerDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-387337",
	                "Source": "/var/lib/docker/volumes/newest-cni-387337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-387337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-387337",
	                "name.minikube.sigs.k8s.io": "newest-cni-387337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19ea7d8f996048fa64d4d866afeea4320430f2f98edf98767d2a1c4c6ca3fe99",
	            "SandboxKey": "/var/run/docker/netns/19ea7d8f9960",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-387337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:5d:4c:44:a6:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f42a70d42248e7fb537c8957fc3c9ad0a04046b4da244cdde31b86ebc56a160b",
	                    "EndpointID": "c1491ff939cb05ddcbda7885723e4df86157bca2d9a03aa5f2a86896d137b8fa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-387337",
	                        "e89a14c7a996"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
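The inspect dump above ends with the container's host-port bindings under NetworkSettings.Ports (22/tcp on 127.0.0.1:33093, 8443/tcp on 127.0.0.1:33096, and so on); these published ports are how minikube reaches SSH and the apiserver inside the kic container. Below is a minimal Go sketch of extracting those bindings from "docker container inspect" output. The struct fields mirror the JSON keys in the dump; the program itself (file name, error handling) is an illustrative assumption, not minikube code.

	// portmap.go: sketch only. "docker container inspect" prints a JSON array
	// (as in the dump above); decode it and list each host-port binding.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the fields this sketch needs.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string // docker's JSON spells this "HostIp"
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "newest-cni-387337").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for port, bindings := range entries[0].NetworkSettings.Ports {
			for _, b := range bindings {
				// e.g. "22/tcp -> 127.0.0.1:33093", matching the dump above
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}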
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337: exit status 6 (351.630328ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:04:34.367710  291094 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-387337" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
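Note the shape of the failure above: the host itself reports Running, and exit status 6 comes from the kubeconfig check in status.go, which looks for a cluster entry named "newest-cni-387337" in /home/jenkins/minikube-integration/22049-2448/kubeconfig and finds none (the FirstStart never got far enough to write it). The sketch below reproduces that lookup with client-go's clientcmd loader; it is the same check in miniature, not minikube's actual implementation, and the exit-code mapping is an assumption.

	// kubecheck.go: hedged sketch of the failing lookup. Load a kubeconfig and
	// test whether a cluster entry exists for the profile name, using
	// k8s.io/client-go/tools/clientcmd.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cluster, ok := cfg.Clusters["newest-cni-387337"]
		if !ok {
			// The status.go:458 case above: profile missing from kubeconfig.
			fmt.Println(`"newest-cni-387337" does not appear in kubeconfig`)
			// mirrors the observed exit status 6 (assumed mapping)
			os.Exit(6)
		}
		fmt.Println("kubeconfig endpoint:", cluster.Server)
	}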
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-387337 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-587884 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-507319                                                                                                                                                                                                                            │ disable-driver-mounts-507319 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ embed-certs-100767 image list --format=json                                                                                                                                                                                                                │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-257359 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-257359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
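The Audit table above is minikube's own command history; it is rendered from the audit log kept under MINIKUBE_HOME (logs/audit.json, one JSON event per line), and for post-mortems it can be easier to read that file directly. A hedged Go sketch follows; the path and the data.command/data.args/data.profile field names are assumptions based on observed audit files, not a documented schema.

	// auditread.go: hedged sketch. Read minikube's audit log as JSON lines and
	// print one command per line. Field names are assumptions, not a contract.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// auditRow models the assumed per-line event shape.
	type auditRow struct {
		Data struct {
			Command string `json:"command"`
			Args    string `json:"args"`
			Profile string `json:"profile"`
		} `json:"data"`
	}

	func main() {
		f, err := os.Open(os.ExpandEnv("$HOME/.minikube/logs/audit.json"))
		if err != nil {
			panic(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			var row auditRow
			if json.Unmarshal(sc.Bytes(), &row) == nil {
				fmt.Printf("%s %s (profile %s)\n", row.Data.Command, row.Data.Args, row.Data.Profile)
			}
		}
	}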
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:02:50
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:02:50.560309  287962 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:02:50.560438  287962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:02:50.560447  287962 out.go:374] Setting ErrFile to fd 2...
	I1206 10:02:50.560453  287962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:02:50.560700  287962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:02:50.561041  287962 out.go:368] Setting JSON to false
	I1206 10:02:50.561931  287962 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6322,"bootTime":1765009049,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:02:50.561998  287962 start.go:143] virtualization:  
	I1206 10:02:50.565075  287962 out.go:179] * [no-preload-257359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:02:50.569157  287962 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:02:50.569230  287962 notify.go:221] Checking for updates...
	I1206 10:02:50.575040  287962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:02:50.578100  287962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:02:50.581099  287962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:02:50.584049  287962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:02:50.587045  287962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:02:50.590515  287962 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:02:50.591076  287962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:02:50.613858  287962 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:02:50.613996  287962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:02:50.681770  287962 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:02:50.672313547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:02:50.681877  287962 docker.go:319] overlay module found
	I1206 10:02:50.685299  287962 out.go:179] * Using the docker driver based on existing profile
	I1206 10:02:50.688097  287962 start.go:309] selected driver: docker
	I1206 10:02:50.688133  287962 start.go:927] validating driver "docker" against &{Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:02:50.688234  287962 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:02:50.688955  287962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:02:50.763306  287962 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:02:50.754198972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:02:50.763670  287962 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:02:50.763694  287962 cni.go:84] Creating CNI manager for ""
	I1206 10:02:50.763755  287962 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:02:50.763787  287962 start.go:353] cluster config:
	{Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:02:50.767045  287962 out.go:179] * Starting "no-preload-257359" primary control-plane node in "no-preload-257359" cluster
	I1206 10:02:50.769839  287962 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:02:50.772658  287962 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:02:50.775524  287962 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:02:50.775664  287962 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/config.json ...
	I1206 10:02:50.776024  287962 cache.go:107] acquiring lock: {Name:mkad35cce177b57f018574c39ee8c3c239eb9b07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776116  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 10:02:50.776125  287962 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.204µs
	I1206 10:02:50.776138  287962 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 10:02:50.776152  287962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:02:50.776297  287962 cache.go:107] acquiring lock: {Name:mk5bfca67d26458a19d81fb604def77746df1eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776349  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 10:02:50.776357  287962 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 64.616µs
	I1206 10:02:50.776363  287962 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776373  287962 cache.go:107] acquiring lock: {Name:mk51ddffc8cf367c8f9ab9dab46cca9425ce4f0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776404  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 10:02:50.776409  287962 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.794µs
	I1206 10:02:50.776415  287962 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776424  287962 cache.go:107] acquiring lock: {Name:mkdb80297b5c34ff2c59c7d0547bc50e4c902573 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776457  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 10:02:50.776467  287962 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.57µs
	I1206 10:02:50.776475  287962 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776497  287962 cache.go:107] acquiring lock: {Name:mk507200c1f46ea68c0c2896fa231924d660663f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776525  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 10:02:50.776530  287962 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.002µs
	I1206 10:02:50.776536  287962 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776545  287962 cache.go:107] acquiring lock: {Name:mkf308199b47415a211213857d6d1bca152d3eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776571  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 10:02:50.776576  287962 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 31.213µs
	I1206 10:02:50.776581  287962 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 10:02:50.776589  287962 cache.go:107] acquiring lock: {Name:mk5d1295ea377d97f7962ba416aea9d5b2908db5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776615  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1206 10:02:50.776620  287962 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.77µs
	I1206 10:02:50.776625  287962 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 10:02:50.776635  287962 cache.go:107] acquiring lock: {Name:mk2939303cfab712d7c12da37ef89ab2271b37f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776664  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 10:02:50.776668  287962 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 34.815µs
	I1206 10:02:50.776674  287962 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 10:02:50.776680  287962 cache.go:87] Successfully saved all images to host disk.
	I1206 10:02:50.798946  287962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:02:50.798971  287962 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:02:50.798991  287962 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:02:50.799021  287962 start.go:360] acquireMachinesLock for no-preload-257359: {Name:mk6d92dd7ed626ac67dff0eb9c6415617a7c299c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.799098  287962 start.go:364] duration metric: took 57.026µs to acquireMachinesLock for "no-preload-257359"
	I1206 10:02:50.799124  287962 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:02:50.799130  287962 fix.go:54] fixHost starting: 
	I1206 10:02:50.799434  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:50.817117  287962 fix.go:112] recreateIfNeeded on no-preload-257359: state=Stopped err=<nil>
	W1206 10:02:50.817159  287962 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:02:50.820604  287962 out.go:252] * Restarting existing docker container for "no-preload-257359" ...
	I1206 10:02:50.820691  287962 cli_runner.go:164] Run: docker start no-preload-257359
	I1206 10:02:51.082081  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:51.109151  287962 kic.go:430] container "no-preload-257359" state is running.
	I1206 10:02:51.111028  287962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 10:02:51.134579  287962 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/config.json ...
	I1206 10:02:51.135073  287962 machine.go:94] provisionDockerMachine start ...
	I1206 10:02:51.135154  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:51.160524  287962 main.go:143] libmachine: Using SSH client type: native
	I1206 10:02:51.161106  287962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 10:02:51.161128  287962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:02:51.161871  287962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:02:54.315394  287962 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-257359
	
	I1206 10:02:54.315419  287962 ubuntu.go:182] provisioning hostname "no-preload-257359"
	I1206 10:02:54.315482  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:54.335607  287962 main.go:143] libmachine: Using SSH client type: native
	I1206 10:02:54.335937  287962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 10:02:54.335955  287962 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-257359 && echo "no-preload-257359" | sudo tee /etc/hostname
	I1206 10:02:54.504049  287962 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-257359
	
	I1206 10:02:54.504125  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:54.526012  287962 main.go:143] libmachine: Using SSH client type: native
	I1206 10:02:54.526337  287962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 10:02:54.526359  287962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-257359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-257359/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-257359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:02:54.679778  287962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:02:54.679871  287962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:02:54.679899  287962 ubuntu.go:190] setting up certificates
	I1206 10:02:54.679930  287962 provision.go:84] configureAuth start
	I1206 10:02:54.680010  287962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 10:02:54.697376  287962 provision.go:143] copyHostCerts
	I1206 10:02:54.697458  287962 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:02:54.697469  287962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:02:54.697553  287962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:02:54.697662  287962 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:02:54.697668  287962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:02:54.697694  287962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:02:54.697758  287962 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:02:54.697763  287962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:02:54.697787  287962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:02:54.697840  287962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.no-preload-257359 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-257359]
	I1206 10:02:54.977047  287962 provision.go:177] copyRemoteCerts
	I1206 10:02:54.977148  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:02:54.977221  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:54.995583  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.103869  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:02:55.123476  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:02:55.143183  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:02:55.162544  287962 provision.go:87] duration metric: took 482.585221ms to configureAuth
	I1206 10:02:55.162615  287962 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:02:55.162829  287962 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:02:55.162844  287962 machine.go:97] duration metric: took 4.027747325s to provisionDockerMachine
	I1206 10:02:55.162853  287962 start.go:293] postStartSetup for "no-preload-257359" (driver="docker")
	I1206 10:02:55.162865  287962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:02:55.162921  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:02:55.162965  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.180527  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.287583  287962 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:02:55.291124  287962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:02:55.291151  287962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:02:55.291168  287962 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:02:55.291224  287962 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:02:55.291309  287962 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:02:55.291497  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:02:55.299238  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:02:55.317641  287962 start.go:296] duration metric: took 154.772967ms for postStartSetup
	I1206 10:02:55.317745  287962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:02:55.317837  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.335751  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.440465  287962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:02:55.445127  287962 fix.go:56] duration metric: took 4.645989389s for fixHost
	I1206 10:02:55.445154  287962 start.go:83] releasing machines lock for "no-preload-257359", held for 4.646041311s
	I1206 10:02:55.445251  287962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 10:02:55.462635  287962 ssh_runner.go:195] Run: cat /version.json
	I1206 10:02:55.462693  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.462962  287962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:02:55.463017  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.487975  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.493550  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.591110  287962 ssh_runner.go:195] Run: systemctl --version
	I1206 10:02:55.687501  287962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:02:55.693096  287962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:02:55.693233  287962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:02:55.701547  287962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:02:55.701573  287962 start.go:496] detecting cgroup driver to use...
	I1206 10:02:55.701604  287962 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:02:55.701653  287962 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:02:55.719594  287962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:02:55.734226  287962 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:02:55.734290  287962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:02:55.750404  287962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:02:55.764033  287962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:02:55.874437  287962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:02:56.003896  287962 docker.go:234] disabling docker service ...
	I1206 10:02:56.004020  287962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:02:56.022407  287962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:02:56.039132  287962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:02:56.150673  287962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:02:56.279968  287962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:02:56.293559  287962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:02:56.309015  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:02:56.320264  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:02:56.329394  287962 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:02:56.329501  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:02:56.338337  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:02:56.348542  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:02:56.357278  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:02:56.366102  287962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:02:56.374530  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:02:56.383495  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:02:56.392560  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:02:56.401292  287962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:02:56.408750  287962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:02:56.416046  287962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:02:56.521476  287962 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:02:56.624710  287962 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:02:56.624790  287962 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:02:56.628711  287962 start.go:564] Will wait 60s for crictl version
	I1206 10:02:56.628775  287962 ssh_runner.go:195] Run: which crictl
	I1206 10:02:56.632374  287962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:02:56.660663  287962 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:02:56.660734  287962 ssh_runner.go:195] Run: containerd --version
	I1206 10:02:56.680803  287962 ssh_runner.go:195] Run: containerd --version
	I1206 10:02:56.706136  287962 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:02:56.708890  287962 cli_runner.go:164] Run: docker network inspect no-preload-257359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:02:56.729633  287962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 10:02:56.733998  287962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:02:56.743903  287962 kubeadm.go:884] updating cluster {Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:02:56.744025  287962 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:02:56.744079  287962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:02:56.773425  287962 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:02:56.773444  287962 cache_images.go:86] Images are preloaded, skipping loading
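The preload check above only needs an image inventory from the CRI, which minikube compares against the expected image set for this Kubernetes version. The same inventory can be inspected by hand; a sketch using jq (jq is an assumption here, minikube parses the JSON itself):

    # List image references that containerd reports over the CRI socket.
    sudo crictl images --output json | jq -r '.images[].repoTags[]'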
	I1206 10:02:56.773451  287962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 10:02:56.773547  287962 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-257359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
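In the generated drop-in above, the first, empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet unit before the override defines its own command line. Once the drop-in is installed under /etc/systemd/system/kubelet.service.d/, the merged unit can be reviewed and reloaded with:

    # Show the base unit plus all drop-ins, then pick up the change.
    systemctl cat kubelet
    sudo systemctl daemon-reload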
	I1206 10:02:56.773604  287962 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:02:56.801911  287962 cni.go:84] Creating CNI manager for ""
	I1206 10:02:56.801937  287962 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:02:56.801959  287962 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:02:56.801983  287962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-257359 NodeName:no-preload-257359 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:02:56.802107  287962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-257359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:02:56.802181  287962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:02:56.810040  287962 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:02:56.810160  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:02:56.817847  287962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:02:56.834027  287962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:02:56.847083  287962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
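The kubeadm.yaml.new just copied above bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents shown earlier. A config like this can be sanity-checked without mutating the node, since kubeadm supports a dry run (a sketch, assuming the path from the log):

    # Validate the rendered config; --dry-run prints what kubeadm would do.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run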
	I1206 10:02:56.859664  287962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:02:56.863520  287962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:02:56.873266  287962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:02:56.982686  287962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:02:57.002169  287962 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359 for IP: 192.168.76.2
	I1206 10:02:57.002242  287962 certs.go:195] generating shared ca certs ...
	I1206 10:02:57.002272  287962 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.002542  287962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:02:57.002639  287962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:02:57.002674  287962 certs.go:257] generating profile certs ...
	I1206 10:02:57.002879  287962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/client.key
	I1206 10:02:57.003008  287962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key.673fc286
	I1206 10:02:57.003090  287962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key
	I1206 10:02:57.003263  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:02:57.003330  287962 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:02:57.003355  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:02:57.003487  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:02:57.003549  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:02:57.003611  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:02:57.003709  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:02:57.004746  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:02:57.030862  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:02:57.051127  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:02:57.070625  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:02:57.091646  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:02:57.109996  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:02:57.128427  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:02:57.146680  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:02:57.165617  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:02:57.183550  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:02:57.201664  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:02:57.220303  287962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:02:57.233337  287962 ssh_runner.go:195] Run: openssl version
	I1206 10:02:57.240029  287962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.247873  287962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:02:57.255843  287962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.259576  287962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.259660  287962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.301069  287962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
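The hash-then-symlink sequence above is how OpenSSL-style CA directories work: openssl x509 -hash prints the certificate's subject-name hash, and verification only finds the cert if it is reachable as <hash>.0 under /etc/ssl/certs (b5213941.0 here). Done by hand:

    # Compute the subject hash and create the lookup symlink OpenSSL expects.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"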
	I1206 10:02:57.308859  287962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.316603  287962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:02:57.324324  287962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.328364  287962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.328429  287962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.371448  287962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:02:57.379279  287962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.386821  287962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:02:57.394739  287962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.398636  287962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.398746  287962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.439669  287962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:02:57.447527  287962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:02:57.451414  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:02:57.495635  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:02:57.538757  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:02:57.580199  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:02:57.621554  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:02:57.663093  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
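Each of the -checkend 86400 runs above asks openssl whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid past that window, non-zero means it is about to expire, which is what minikube keys certificate rotation off. For example:

    # Exit 0 if the cert is still valid 24 hours from now.
    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for 24h+" || echo "expiring soon"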
	I1206 10:02:57.704506  287962 kubeadm.go:401] StartCluster: {Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:02:57.704612  287962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:02:57.704683  287962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:02:57.737766  287962 cri.go:89] found id: ""
	I1206 10:02:57.737905  287962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:02:57.747113  287962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:02:57.747187  287962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:02:57.747271  287962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:02:57.755581  287962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:02:57.756044  287962 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:02:57.756207  287962 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-257359" cluster setting kubeconfig missing "no-preload-257359" context setting]
	I1206 10:02:57.756524  287962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
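The kubeconfig repair above simply re-adds the missing cluster and context entries to the shared kubeconfig file. The equivalent manual kubectl calls would be roughly the following sketch (server address and CA path are placeholders, not taken from this run):

    # Recreate the cluster and context entries for the profile.
    kubectl config set-cluster no-preload-257359 \
      --server=https://192.168.76.2:8443 \
      --certificate-authority=/path/to/.minikube/ca.crt
    kubectl config set-context no-preload-257359 \
      --cluster=no-preload-257359 --user=no-preload-257359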
	I1206 10:02:57.758045  287962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:02:57.767197  287962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1206 10:02:57.767270  287962 kubeadm.go:602] duration metric: took 20.064098ms to restartPrimaryControlPlane
	I1206 10:02:57.767298  287962 kubeadm.go:403] duration metric: took 62.801543ms to StartCluster
	I1206 10:02:57.767343  287962 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.767500  287962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:02:57.768125  287962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.768380  287962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:02:57.768778  287962 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:02:57.768818  287962 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:02:57.768907  287962 addons.go:70] Setting storage-provisioner=true in profile "no-preload-257359"
	I1206 10:02:57.768922  287962 addons.go:239] Setting addon storage-provisioner=true in "no-preload-257359"
	I1206 10:02:57.768948  287962 host.go:66] Checking if "no-preload-257359" exists ...
	I1206 10:02:57.769092  287962 addons.go:70] Setting dashboard=true in profile "no-preload-257359"
	I1206 10:02:57.769107  287962 addons.go:239] Setting addon dashboard=true in "no-preload-257359"
	W1206 10:02:57.769113  287962 addons.go:248] addon dashboard should already be in state true
	I1206 10:02:57.769132  287962 host.go:66] Checking if "no-preload-257359" exists ...
	I1206 10:02:57.769421  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.769598  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.771431  287962 addons.go:70] Setting default-storageclass=true in profile "no-preload-257359"
	I1206 10:02:57.771472  287962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-257359"
	I1206 10:02:57.771804  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.774271  287962 out.go:179] * Verifying Kubernetes components...
	I1206 10:02:57.777342  287962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:02:57.814184  287962 addons.go:239] Setting addon default-storageclass=true in "no-preload-257359"
	I1206 10:02:57.814227  287962 host.go:66] Checking if "no-preload-257359" exists ...
	I1206 10:02:57.814645  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.822142  287962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 10:02:57.822210  287962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:02:57.824805  287962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:57.824833  287962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:02:57.824900  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:57.829489  287962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 10:02:57.833709  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 10:02:57.833737  287962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 10:02:57.833810  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:57.854012  287962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:02:57.854037  287962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:02:57.854112  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:57.856277  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:57.890754  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:57.895620  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
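All three add-on file copies above reuse the same SSH endpoint: with the docker driver, the node container's 22/tcp is published on the host (port 33098 in this run), discovered through the docker inspect template shown earlier. The short form for a manual lookup:

    # Show where the container's SSH port is published on the host.
    docker port no-preload-257359 22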
	I1206 10:02:58.001418  287962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:02:58.013906  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:58.039554  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:02:58.055658  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 10:02:58.055695  287962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 10:02:58.101580  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 10:02:58.101618  287962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 10:02:58.128504  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 10:02:58.128540  287962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 10:02:58.143820  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 10:02:58.143842  287962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 10:02:58.157352  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 10:02:58.157374  287962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 10:02:58.170340  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 10:02:58.170363  287962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 10:02:58.183841  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 10:02:58.183863  287962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 10:02:58.196825  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 10:02:58.196897  287962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 10:02:58.210321  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:02:58.210397  287962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 10:02:58.225210  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:02:58.721996  287962 node_ready.go:35] waiting up to 6m0s for node "no-preload-257359" to be "Ready" ...
	W1206 10:02:58.722385  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.722423  287962 retry.go:31] will retry after 208.185624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
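Every failure in this retry loop is the same symptom: kubectl's validation step tries to download the OpenAPI schema from the apiserver at localhost:8443 and the connection is refused because the control plane is not up yet, so minikube backs off and retries. A quick manual readiness probe against the same endpoint (run on the node; -k because the serving cert is not being verified here) would be:

    # Expect "ok" once the apiserver is ready; connection refused until then.
    curl -sk https://localhost:8443/readyz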
	W1206 10:02:58.722498  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.722524  287962 retry.go:31] will retry after 257.532203ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:58.722744  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.722763  287962 retry.go:31] will retry after 233.335704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.931351  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:58.956947  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:02:58.980534  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:02:59.025353  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.025390  287962 retry.go:31] will retry after 353.673401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:59.100456  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.100492  287962 retry.go:31] will retry after 331.036919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:59.107099  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.107140  287962 retry.go:31] will retry after 441.449257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.379273  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:59.432019  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:02:59.442471  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.442555  287962 retry.go:31] will retry after 796.609581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:59.506117  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.506155  287962 retry.go:31] will retry after 415.679971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.549272  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:02:59.613494  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.613567  287962 retry.go:31] will retry after 772.999564ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.922714  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:02:59.987770  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.987802  287962 retry.go:31] will retry after 559.230816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.240691  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:03:00.387605  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:00.455516  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.455602  287962 retry.go:31] will retry after 1.187622029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:00.463633  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.463667  287962 retry.go:31] will retry after 1.200867497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.547852  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:00.612093  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.612139  287962 retry.go:31] will retry after 893.435078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:00.722580  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
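
The node_ready poll above is the programmatic equivalent of the following check, which can be run by hand once the apiserver is reachable (a sketch; the node name and kubeconfig path are taken from this log):

    kubectl --kubeconfig /var/lib/minikube/kubeconfig get node no-preload-257359 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "True" once the kubelet reports the node Ready
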
	I1206 10:03:01.505896  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:01.574850  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.574887  287962 retry.go:31] will retry after 1.48070732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.644272  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:03:01.664837  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:01.713457  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.713495  287962 retry.go:31] will retry after 1.793608766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:01.741247  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.741282  287962 retry.go:31] will retry after 1.808351217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:02.723834  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:03.056692  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:03.120499  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.120617  287962 retry.go:31] will retry after 3.123226715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.507497  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:03:03.550077  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:03.603673  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.603716  287962 retry.go:31] will retry after 1.607269464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:03.627477  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.627509  287962 retry.go:31] will retry after 1.427548448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.055613  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:05.122568  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.122601  287962 retry.go:31] will retry after 4.264191427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.212016  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:05.222808  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:05.272035  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.272069  287962 retry.go:31] will retry after 4.227301864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:06.244562  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:06.309955  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:06.310033  287962 retry.go:31] will retry after 4.216626241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:07.223150  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:09.387517  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:09.457868  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:09.457900  287962 retry.go:31] will retry after 2.71431214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:09.499976  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:09.592059  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:09.592097  287962 retry.go:31] will retry after 2.312821913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:09.722871  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:10.527449  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:10.596453  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:10.596493  287962 retry.go:31] will retry after 5.508635395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:11.905982  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:11.973035  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:11.973068  287962 retry.go:31] will retry after 5.314130156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:12.173390  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:12.223182  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:12.232700  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:12.232730  287962 retry.go:31] will retry after 4.087053557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:14.722724  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:16.105932  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:16.170813  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:16.170847  287962 retry.go:31] will retry after 7.046098386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:16.320412  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:16.383512  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:16.383547  287962 retry.go:31] will retry after 7.362220175s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:16.723439  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:17.287932  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:17.349195  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:17.349229  287962 retry.go:31] will retry after 7.285529113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:19.223445  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:21.722607  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:23.217212  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:23.292880  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:23.292916  287962 retry.go:31] will retry after 20.839138696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:23.746772  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:23.837743  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:23.837784  287962 retry.go:31] will retry after 13.347463373s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:24.222666  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:24.635188  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:24.696400  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:24.696432  287962 retry.go:31] will retry after 15.254736641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:26.722523  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:28.722631  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:30.722708  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:33.222657  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:35.222704  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:37.186329  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:37.223320  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:37.292820  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:37.292848  287962 retry.go:31] will retry after 20.057827776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:39.722636  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:39.952067  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:40.017939  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:40.018752  287962 retry.go:31] will retry after 24.548199368s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:42.222608  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:44.132237  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:44.192642  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:44.192676  287962 retry.go:31] will retry after 19.029425314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:44.223357  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:46.722764  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:49.222520  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:51.222722  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:53.722670  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:56.222685  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:57.351089  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:57.410313  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:57.410344  287962 retry.go:31] will retry after 37.517817356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
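
Editor's note on the retry cadence: the storageclass retries logged by retry.go grow as 2.71s, 4.09s, 7.36s, 13.35s, 20.06s, 37.52s — roughly doubling each attempt with random noise, i.e. a jittered exponential backoff. The sketch below reproduces that shape under that assumption; it is a stand-in illustration, not minikube's actual retry.go implementation.

// backoff_sketch.go: a jittered exponential backoff of the shape visible in
// the retry.go lines above. Stand-in sketch only; not minikube code.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Double the delay each attempt and add up to 50% jitter, which
		// reproduces the rough doubling-with-noise seen in the log.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %s: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(6, 2*time.Second, func() error {
		return fmt.Errorf("connection refused") // stand-in for the failing kubectl apply
	})
	fmt.Println("giving up:", err)
}
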
	W1206 10:03:58.223443  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:00.723057  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:03.222473  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:04:03.222747  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:03.285193  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:04:03.285231  287962 retry.go:31] will retry after 27.356198279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:04:04.567241  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:04:04.627406  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:04:04.627436  287962 retry.go:31] will retry after 26.195836442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:04:05.722912  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:08.222592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:10.223509  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:12.722603  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:15.222600  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:17.222760  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:19.723361  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:22.222650  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:24.222709  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:26.223343  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:28.722606  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:33.492075  278643 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000455959s
	I1206 10:04:33.497324  278643 kubeadm.go:319] 
	I1206 10:04:33.497409  278643 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:04:33.497452  278643 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:04:33.497564  278643 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:04:33.497573  278643 kubeadm.go:319] 
	I1206 10:04:33.497680  278643 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:04:33.497715  278643 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:04:33.497750  278643 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:04:33.497758  278643 kubeadm.go:319] 
	I1206 10:04:33.509281  278643 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:04:33.509716  278643 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:04:33.509836  278643 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:04:33.510075  278643 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:04:33.510082  278643 kubeadm.go:319] 
	I1206 10:04:33.510156  278643 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:04:33.510222  278643 kubeadm.go:403] duration metric: took 8m7.660801722s to StartCluster
	I1206 10:04:33.510279  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:04:33.510354  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:04:33.557741  278643 cri.go:89] found id: ""
	I1206 10:04:33.557779  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.557788  278643 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:04:33.557796  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:04:33.557870  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:04:33.590687  278643 cri.go:89] found id: ""
	I1206 10:04:33.590716  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.590766  278643 logs.go:284] No container was found matching "etcd"
	I1206 10:04:33.590773  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:04:33.590860  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:04:33.619661  278643 cri.go:89] found id: ""
	I1206 10:04:33.619702  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.619713  278643 logs.go:284] No container was found matching "coredns"
	I1206 10:04:33.619720  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:04:33.619795  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:04:33.645015  278643 cri.go:89] found id: ""
	I1206 10:04:33.645040  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.645050  278643 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:04:33.645056  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:04:33.645136  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:04:33.670104  278643 cri.go:89] found id: ""
	I1206 10:04:33.670173  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.670200  278643 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:04:33.670221  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:04:33.670299  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:04:33.695765  278643 cri.go:89] found id: ""
	I1206 10:04:33.695789  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.695798  278643 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:04:33.695805  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:04:33.695865  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:04:33.722778  278643 cri.go:89] found id: ""
	I1206 10:04:33.722855  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.722877  278643 logs.go:284] No container was found matching "kindnet"
	I1206 10:04:33.722899  278643 logs.go:123] Gathering logs for kubelet ...
	I1206 10:04:33.722939  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:04:33.781701  278643 logs.go:123] Gathering logs for dmesg ...
	I1206 10:04:33.781737  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:04:33.795784  278643 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:04:33.795812  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:04:33.861564  278643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:04:33.852548    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.853295    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.854985    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.855733    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.857242    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:04:33.852548    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.853295    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.854985    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.855733    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.857242    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:04:33.861601  278643 logs.go:123] Gathering logs for containerd ...
	I1206 10:04:33.861614  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:04:33.901364  278643 logs.go:123] Gathering logs for container status ...
	I1206 10:04:33.901402  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:04:33.932222  278643 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000455959s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:04:33.932285  278643 out.go:285] * 
	W1206 10:04:33.932340  278643 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000455959s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:04:33.932357  278643 out.go:285] * 
	W1206 10:04:33.934488  278643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:04:33.941229  278643 out.go:203] 
	W1206 10:04:33.944295  278643 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000455959s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:04:33.944336  278643 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:04:33.944358  278643 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:04:33.949524  278643 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969320814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969390476Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969486092Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969600119Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969665285Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969726053Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969787649Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969847736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969914559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.970006646Z" level=info msg="Connect containerd service"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.970348147Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.971072756Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.988408505Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.988883086Z" level=info msg="Start subscribing containerd event"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.989134961Z" level=info msg="Start recovering state"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.989034045Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030882208Z" level=info msg="Start event monitor"
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030937995Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030948448Z" level=info msg="Start streaming server"
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030960846Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030969552Z" level=info msg="runtime interface starting up..."
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030982828Z" level=info msg="starting plugins..."
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030997007Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 09:56:24 newest-cni-387337 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.033034408Z" level=info msg="containerd successfully booted in 0.087811s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:04:35.111794    4984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:35.112358    4984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:35.114111    4984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:35.114609    4984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:35.116308    4984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:04:35 up  1:47,  0 user,  load average: 0.79, 0.94, 1.77
	Linux newest-cni-387337 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:04:32 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:04:32 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 06 10:04:32 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:04:32 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:04:32 newest-cni-387337 kubelet[4785]: E1206 10:04:32.804351    4785 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:04:32 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:04:32 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:04:33 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 10:04:33 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:04:33 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:04:33 newest-cni-387337 kubelet[4791]: E1206 10:04:33.549040    4791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:04:33 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:04:33 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:04:34 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 10:04:34 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:04:34 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:04:34 newest-cni-387337 kubelet[4885]: E1206 10:04:34.287229    4885 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:04:34 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:04:34 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:04:34 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 10:04:34 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:04:34 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:04:35 newest-cni-387337 kubelet[4978]: E1206 10:04:35.101898    4978 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:04:35 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:04:35 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
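Note on the failure above: the suggestion printed in the minikube output ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") translates into a retry along the following lines. This is only a sketch of minikube's own hint, with the profile name, driver, runtime, and Kubernetes version taken from the logs above; the kubelet journal indicates the crash loop is actually the cgroup v1 validation, so this flag alone may not be sufficient.

	out/minikube-linux-arm64 start -p newest-cni-387337 --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd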
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337: exit status 6 (323.422754ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:04:35.587105  291330 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-387337" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-387337" apiserver is not running, skipping kubectl commands (state="Stopped")
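The root cause visible in the kubelet journal above is the cgroup v1 check: kubelet v1.35 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm SystemVerification warning points at the FailCgroupV1 option and KEP 5573. A minimal kubelet configuration sketch that opts back into cgroup v1, assuming the v1beta1 camelCase spelling failCgroupV1:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Assumption: explicitly allow the kubelet to run on a cgroup v1 host,
	# per the SystemVerification warning quoted in the kubeadm output above.
	failCgroupV1: false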
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (503.27s)

TestStartStop/group/no-preload/serial/DeployApp (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-257359 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-257359 create -f testdata/busybox.yaml: exit status 1 (54.987683ms)

** stderr ** 
	error: context "no-preload-257359" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-257359 create -f testdata/busybox.yaml failed: exit status 1
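The "context does not exist" failure is a downstream effect of FirstStart never completing: the profile was never written into the kubeconfig, as the status.go:458 errors elsewhere in this report confirm. A quick, hypothetical way to verify that from the host:

	# Lists the contexts kubectl knows about; "no-preload-257359" is expected to be absent here.
	kubectl config get-contexts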
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-257359
helpers_test.go:243: (dbg) docker inspect no-preload-257359:

-- stdout --
	[
	    {
	        "Id": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	        "Created": "2025-12-06T09:52:27.333376101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:27.474519381Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hostname",
	        "HostsPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hosts",
	        "LogPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26-json.log",
	        "Name": "/no-preload-257359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-257359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-257359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	                "LowerDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-257359",
	                "Source": "/var/lib/docker/volumes/no-preload-257359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-257359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-257359",
	                "name.minikube.sigs.k8s.io": "no-preload-257359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9be8b5c820dd4c3fe37c75e77303bf5032a3f74d4c68aab4997b8f54cdf3a70",
	            "SandboxKey": "/var/run/docker/netns/b9be8b5c820d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-257359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:a5:2f:79:60:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b05bfbfa55363c82b2c20e75689dc6d905b9177d9ed6efb1bc4c663e65903cf4",
	                    "EndpointID": "37f42c3d2ab503584211eef52439f3c17e372039f5b35f15d09e7f8a0c022b40",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-257359",
	                        "76494ba86a40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
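For reference, the inspect output above maps the apiserver's container port 8443/tcp to 127.0.0.1:33081 on the host. A hypothetical probe of that endpoint (-k because minikube's apiserver certificate is not in the host trust store) would be expected to fail here with connection refused, matching the dial errors earlier in this report:

	curl -k https://127.0.0.1:33081/healthz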
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359: exit status 6 (321.984709ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:00:56.512705  285005 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
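minikube's status exit code is a bitmask (1 for the host, 2 for the cluster, 4 for Kubernetes), so exit status 6 with "Running" on stdout means the host container is up while the cluster and Kubernetes checks failed; here the root cause is the missing kubeconfig entry named in stderr. A minimal sketch of that failing lookup, assuming client-go's clientcmd package rather than minikube's own status code:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig file the status check reads.
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22049-2448/kubeconfig")
        if err != nil {
            panic(err)
        }
        // The stderr error above fires when the profile has no cluster entry.
        if _, ok := cfg.Clusters["no-preload-257359"]; !ok {
            fmt.Println(`"no-preload-257359" does not appear in kubeconfig`)
        }
    }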
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-257359 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-100767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p embed-certs-100767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ image   │ old-k8s-version-587884 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ pause   │ -p old-k8s-version-587884 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ unpause │ -p old-k8s-version-587884 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-507319                                                                                                                                                                                                                            │ disable-driver-mounts-507319 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ embed-certs-100767 image list --format=json                                                                                                                                                                                                                │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:56:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:56:12.381215  278643 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:56:12.381413  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381441  278643 out.go:374] Setting ErrFile to fd 2...
	I1206 09:56:12.381461  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381758  278643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:56:12.382257  278643 out.go:368] Setting JSON to false
	I1206 09:56:12.383240  278643 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5924,"bootTime":1765009049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:56:12.383355  278643 start.go:143] virtualization:  
	I1206 09:56:12.387258  278643 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:56:12.391484  278643 notify.go:221] Checking for updates...
	I1206 09:56:12.391496  278643 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:56:12.394851  278643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:56:12.398015  278643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:56:12.400990  278643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:56:12.403944  278643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:56:12.407028  278643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:56:12.410729  278643 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:12.410840  278643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:56:12.445065  278643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:56:12.445213  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.519754  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.507997479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.519868  278643 docker.go:319] overlay module found
	I1206 09:56:12.523177  278643 out.go:179] * Using the docker driver based on user configuration
	I1206 09:56:12.526466  278643 start.go:309] selected driver: docker
	I1206 09:56:12.526501  278643 start.go:927] validating driver "docker" against <nil>
	I1206 09:56:12.526518  278643 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:56:12.527486  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.593335  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.584358845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.593500  278643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 09:56:12.593524  278643 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 09:56:12.593752  278643 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:56:12.596647  278643 out.go:179] * Using Docker driver with root privileges
	I1206 09:56:12.599543  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:12.599621  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:12.599637  278643 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:56:12.599733  278643 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:12.602953  278643 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 09:56:12.605789  278643 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 09:56:12.608936  278643 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:56:12.611867  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:12.611918  278643 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 09:56:12.611946  278643 cache.go:65] Caching tarball of preloaded images
	I1206 09:56:12.611951  278643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:56:12.612037  278643 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 09:56:12.612047  278643 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 09:56:12.612154  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:12.612171  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json: {Name:mk449f962f0653f31dbbb03aed6f74703a91443a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:12.631940  278643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:56:12.631967  278643 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:56:12.631982  278643 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:56:12.632013  278643 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:56:12.632117  278643 start.go:364] duration metric: took 83.89µs to acquireMachinesLock for "newest-cni-387337"
	I1206 09:56:12.632148  278643 start.go:93] Provisioning new machine with config: &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:56:12.632223  278643 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:56:12.635711  278643 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:56:12.635957  278643 start.go:159] libmachine.API.Create for "newest-cni-387337" (driver="docker")
	I1206 09:56:12.635999  278643 client.go:173] LocalClient.Create starting
	I1206 09:56:12.636069  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 09:56:12.636109  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636134  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636197  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 09:56:12.636218  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636234  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636615  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:56:12.654202  278643 cli_runner.go:211] docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:56:12.654286  278643 network_create.go:284] running [docker network inspect newest-cni-387337] to gather additional debugging logs...
	I1206 09:56:12.654307  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337
	W1206 09:56:12.674169  278643 cli_runner.go:211] docker network inspect newest-cni-387337 returned with exit code 1
	I1206 09:56:12.674197  278643 network_create.go:287] error running [docker network inspect newest-cni-387337]: docker network inspect newest-cni-387337: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-387337 not found
	I1206 09:56:12.674213  278643 network_create.go:289] output of [docker network inspect newest-cni-387337]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-387337 not found
	
	** /stderr **
	I1206 09:56:12.674320  278643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:12.697162  278643 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
	I1206 09:56:12.697876  278643 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6479799cc46a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:b3:f8:bd:10:a1} reservation:<nil>}
	I1206 09:56:12.698630  278643 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-045bb1cdddf9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:c6:f0:a4:f5:8d} reservation:<nil>}
	I1206 09:56:12.699284  278643 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b05bfbfa5536 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:01:4f:ea:ac:91} reservation:<nil>}
	I1206 09:56:12.700138  278643 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5b80}
	I1206 09:56:12.700211  278643 network_create.go:124] attempt to create docker network newest-cni-387337 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 09:56:12.700393  278643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-387337 newest-cni-387337
	I1206 09:56:12.761289  278643 network_create.go:108] docker network newest-cni-387337 192.168.85.0/24 created
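The four "skipping subnet" lines above are minikube's free-subnet scan: candidate 192.168.x.0/24 blocks are probed in order, and the first one not already backing a docker bridge interface is reserved for the new network. A minimal sketch of that scan; the starting octet 49 and the step of 9 are inferred from this log, not quoted from minikube's source:

    package main

    import "fmt"

    func main() {
        // Subnets already backing docker bridges, per the log above.
        taken := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        for third := 49; third < 256; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[subnet] {
                fmt.Println("using free private subnet", subnet) // 192.168.85.0/24
                return
            }
        }
    }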
	I1206 09:56:12.761339  278643 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-387337" container
	I1206 09:56:12.761412  278643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:56:12.778118  278643 cli_runner.go:164] Run: docker volume create newest-cni-387337 --label name.minikube.sigs.k8s.io=newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:56:12.796678  278643 oci.go:103] Successfully created a docker volume newest-cni-387337
	I1206 09:56:12.796763  278643 cli_runner.go:164] Run: docker run --rm --name newest-cni-387337-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --entrypoint /usr/bin/test -v newest-cni-387337:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:56:13.320572  278643 oci.go:107] Successfully prepared a docker volume newest-cni-387337
	I1206 09:56:13.320655  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:13.320668  278643 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:56:13.320746  278643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:56:17.285875  278643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.965091453s)
	I1206 09:56:17.285931  278643 kic.go:203] duration metric: took 3.965259503s to extract preloaded images to volume ...
	W1206 09:56:17.286072  278643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 09:56:17.286184  278643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:56:17.343671  278643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-387337 --name newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-387337 --network newest-cni-387337 --ip 192.168.85.2 --volume newest-cni-387337:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:56:17.667864  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Running}}
	I1206 09:56:17.689633  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.721713  278643 cli_runner.go:164] Run: docker exec newest-cni-387337 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:56:17.785391  278643 oci.go:144] the created container "newest-cni-387337" has a running status.
	I1206 09:56:17.785426  278643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa...
	I1206 09:56:17.929044  278643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:56:17.954229  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.978708  278643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:56:17.978729  278643 kic_runner.go:114] Args: [docker exec --privileged newest-cni-387337 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:56:18.030854  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:18.061034  278643 machine.go:94] provisionDockerMachine start ...
	I1206 09:56:18.061129  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:18.105041  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:18.105395  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:18.105412  278643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:56:18.106117  278643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39934->127.0.0.1:33093: read: connection reset by peer
	I1206 09:56:21.259644  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 09:56:21.259668  278643 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 09:56:21.259730  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.280819  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.281151  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.281167  278643 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 09:56:21.446750  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 09:56:21.446840  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.466708  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.467034  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.467060  278643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:56:21.636152  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:56:21.636184  278643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 09:56:21.636206  278643 ubuntu.go:190] setting up certificates
	I1206 09:56:21.636216  278643 provision.go:84] configureAuth start
	I1206 09:56:21.636276  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:21.657085  278643 provision.go:143] copyHostCerts
	I1206 09:56:21.657167  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 09:56:21.657182  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 09:56:21.657287  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 09:56:21.657399  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 09:56:21.657409  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 09:56:21.657439  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 09:56:21.657519  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 09:56:21.657530  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 09:56:21.657556  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 09:56:21.657626  278643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
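configureAuth issues a server certificate whose subject alternative names cover everything a client might dial: the loopback address, the static container IP, and the machine's hostnames, exactly the san=[...] list above. A standard-library sketch of producing such a certificate (self-signed here for brevity, whereas minikube signs with the CA key from ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-387337"}},
            // The SAN list from the provision step above.
            DNSNames:    []string{"localhost", "minikube", "newest-cni-387337"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration in the cluster config
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }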
	I1206 09:56:22.235324  278643 provision.go:177] copyRemoteCerts
	I1206 09:56:22.235498  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:56:22.235563  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.254382  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.371978  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:56:22.391750  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:56:22.409835  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 09:56:22.427840  278643 provision.go:87] duration metric: took 791.601956ms to configureAuth
	I1206 09:56:22.427871  278643 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:56:22.428075  278643 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:22.428086  278643 machine.go:97] duration metric: took 4.367032221s to provisionDockerMachine
	I1206 09:56:22.428093  278643 client.go:176] duration metric: took 9.792082753s to LocalClient.Create
	I1206 09:56:22.428116  278643 start.go:167] duration metric: took 9.792160612s to libmachine.API.Create "newest-cni-387337"
	I1206 09:56:22.428128  278643 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 09:56:22.428139  278643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:56:22.428194  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:56:22.428238  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.445246  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.552047  278643 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:56:22.555602  278643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:56:22.555631  278643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:56:22.555643  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 09:56:22.555699  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 09:56:22.555780  278643 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 09:56:22.555887  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:56:22.563581  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:56:22.582270  278643 start.go:296] duration metric: took 154.127995ms for postStartSetup
	I1206 09:56:22.582688  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.600191  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:22.600480  278643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:56:22.600532  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.618476  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.721461  278643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:56:22.732754  278643 start.go:128] duration metric: took 10.100506966s to createHost
	I1206 09:56:22.732791  278643 start.go:83] releasing machines lock for "newest-cni-387337", held for 10.100657655s
	I1206 09:56:22.732898  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.752253  278643 ssh_runner.go:195] Run: cat /version.json
	I1206 09:56:22.752314  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.752332  278643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:56:22.752395  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.774900  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.786887  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.981230  278643 ssh_runner.go:195] Run: systemctl --version
	I1206 09:56:22.988594  278643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:56:22.993872  278643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:56:22.993970  278643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:56:23.036477  278643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 09:56:23.036554  278643 start.go:496] detecting cgroup driver to use...
	I1206 09:56:23.036604  278643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 09:56:23.036691  278643 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:56:23.053535  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:56:23.068263  278643 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:56:23.068359  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:56:23.086894  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:56:23.106796  278643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:56:23.229113  278643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:56:23.353681  278643 docker.go:234] disabling docker service ...
	I1206 09:56:23.353777  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:56:23.376315  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:56:23.389550  278643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:56:23.511242  278643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:56:23.632737  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:56:23.646096  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:56:23.661684  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:56:23.671182  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:56:23.680434  278643 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:56:23.680559  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:56:23.689627  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.698546  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:56:23.707890  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.719929  278643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:56:23.733633  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:56:23.743339  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:56:23.753107  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:56:23.763042  278643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:56:23.772383  278643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:56:23.783215  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:23.897379  278643 ssh_runner.go:195] Run: sudo systemctl restart containerd
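The run of sed commands above rewrites /etc/containerd/config.toml in place before the restart: pinning the sandbox (pause) image, mapping legacy runtime names onto the runc v2 shim, and, because the host cgroup driver was detected as "cgroupfs", forcing SystemdCgroup to false so kubelet and containerd agree on cgroup management. The SystemdCgroup rewrite expressed as the equivalent Go regexp (the TOML fragment is illustrative, not the image's full config):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        // Same substitution as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }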
	I1206 09:56:24.034106  278643 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:56:24.034227  278643 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:56:24.038555  278643 start.go:564] Will wait 60s for crictl version
	I1206 09:56:24.038667  278643 ssh_runner.go:195] Run: which crictl
	I1206 09:56:24.042893  278643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:56:24.073212  278643 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 09:56:24.073340  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.100352  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.125479  278643 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 09:56:24.128585  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:24.145134  278643 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:56:24.149083  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
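The bash one-liner above keeps the host.minikube.internal mapping idempotent: it filters any previous entry out of /etc/hosts, appends the current gateway address, and copies the result back via a temp file. The same filter-then-append step in Go (illustrative only; it prints rather than overwriting the file):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Mirrors the grep -v $'\thost.minikube.internal$' filter above.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.85.1\thost.minikube.internal")
        fmt.Println(strings.Join(kept, "\n"))
    }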
	I1206 09:56:24.161762  278643 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 09:56:24.164661  278643 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:56:24.164804  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:24.164892  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.190128  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.190154  278643 containerd.go:534] Images already preloaded, skipping extraction
	I1206 09:56:24.190214  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.214192  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.214220  278643 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:56:24.214229  278643 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 09:56:24.214329  278643 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
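The empty ExecStart= line in the kubelet unit above is the standard systemd idiom for overriding a command: assigning an empty value clears the previously declared ExecStart list so the next assignment replaces it instead of erroring out (non-oneshot services allow only one ExecStart). The minimal shape of such a drop-in:

	[Service]
	ExecStart=
	ExecStart=/path/to/new/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf   # hypothetical path; the real command is in the log above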
	I1206 09:56:24.214400  278643 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:56:24.241654  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:24.241679  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:24.241702  278643 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 09:56:24.241726  278643 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:56:24.241847  278643 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:56:24.241920  278643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:56:24.250168  278643 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:56:24.250236  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:56:24.259935  278643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 09:56:24.273892  278643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:56:24.288011  278643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
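At this point the rendered kubeadm config (shown in full above) has been written to /var/tmp/minikube/kubeadm.yaml.new. As a hedged aside, recent kubeadm releases can syntax-check such a file without touching the node, which is a quick way to rule out config errors when an init fails the way the ones below do (path and version taken from this log):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new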
	I1206 09:56:24.300649  278643 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:56:24.304319  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:56:24.314437  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:24.421252  278643 ssh_runner.go:195] Run: sudo systemctl start kubelet
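After the daemon-reload, systemd picks up the kubelet unit and the 10-kubeadm.conf drop-in written above. Two stock systemd commands confirm what will actually be executed and whether the service came up (a sketch; run inside the node):

	systemctl cat kubelet       # merged view: unit file plus drop-ins
	systemctl is-active kubelet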
	I1206 09:56:24.437400  278643 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 09:56:24.437465  278643 certs.go:195] generating shared ca certs ...
	I1206 09:56:24.437496  278643 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.437676  278643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 09:56:24.437744  278643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 09:56:24.437767  278643 certs.go:257] generating profile certs ...
	I1206 09:56:24.437853  278643 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 09:56:24.437892  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt with IP's: []
	I1206 09:56:24.906874  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt ...
	I1206 09:56:24.906907  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt: {Name:mk3786951ca6b934a39ce0b897be0476ac498386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907112  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key ...
	I1206 09:56:24.907126  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key: {Name:mk400b28e78f0247222772118d8e6e5e81e847c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907230  278643 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 09:56:24.907249  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 09:56:25.112458  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd ...
	I1206 09:56:25.112494  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd: {Name:mk0b66241f430a839566e8733856f4f7778dd203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.112675  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd ...
	I1206 09:56:25.113433  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd: {Name:mk545f1d084e139bf8c177372caec577367d5287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.113573  278643 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt
	I1206 09:56:25.113667  278643 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key
	I1206 09:56:25.113729  278643 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 09:56:25.113755  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt with IP's: []
	I1206 09:56:25.390925  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt ...
	I1206 09:56:25.390958  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt: {Name:mkf533c4c7795dfadd5e4919382846ec6f68f803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391162  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key ...
	I1206 09:56:25.391180  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key: {Name:mk080dba3e2186a2cc27fdce20eb9b0d79705a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391368  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 09:56:25.391429  278643 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 09:56:25.391438  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:56:25.391466  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 09:56:25.391497  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:56:25.391527  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 09:56:25.391576  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
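Each certificate generated above can be checked with stock OpenSSL. For instance, to confirm the freshly minted apiserver cert carries the SANs requested at generation time (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2), a sketch using the path from this log (the -ext flag needs OpenSSL 1.1.1+):

	openssl x509 -noout -subject -dates -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt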
	I1206 09:56:25.392167  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:56:25.411617  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:56:25.431093  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:56:25.449697  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:56:25.468550  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:56:25.487105  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:56:25.505768  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:56:25.525108  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:56:25.543500  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:56:25.562465  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 09:56:25.580776  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 09:56:25.598408  278643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:56:25.612387  278643 ssh_runner.go:195] Run: openssl version
	I1206 09:56:25.618822  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.626357  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:56:25.633933  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637838  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637908  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.679350  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:56:25.686883  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:56:25.694288  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.701929  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 09:56:25.709757  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.713960  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.714081  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.755271  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:56:25.762807  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
	I1206 09:56:25.770247  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.777748  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 09:56:25.785439  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789191  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789278  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.830268  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:56:25.837948  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
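The openssl x509 -hash / ln -fs pairs above implement the OpenSSL hashed-directory convention: every trusted certificate must be reachable as /etc/ssl/certs/&lt;subject-hash&gt;.0, where the hash is exactly what -hash prints (b5213941 for minikubeCA in this run). The same operation in generic form:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"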
	I1206 09:56:25.845509  278643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:56:25.849323  278643 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:56:25.849426  278643 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:25.849528  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:56:25.849591  278643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:56:25.875432  278643 cri.go:89] found id: ""
	I1206 09:56:25.875532  278643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:56:25.883715  278643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:56:25.891695  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:25.891813  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:25.899921  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:25.899941  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:25.900032  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:25.908195  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:25.908312  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:25.916060  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:25.924068  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:25.924164  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:25.931858  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.939818  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:25.939921  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.948338  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:25.956152  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:25.956247  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
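The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes survives only if it already points at the expected control-plane endpoint (a no-op here, since none of the files exist yet on first start). Condensed into a loop, this is a sketch of the pattern, not minikube's actual code, which runs each step separately over SSH:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done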
	I1206 09:56:25.963660  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:26.031399  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:26.031465  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:26.131684  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:26.131760  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:26.131802  278643 kubeadm.go:319] OS: Linux
	I1206 09:56:26.131854  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:26.131907  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:26.131958  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:26.132009  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:26.132062  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:26.132114  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:26.132163  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:26.132215  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:26.132262  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:26.203361  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:26.203507  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:26.203605  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:26.210048  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:26.216410  278643 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:26.216591  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:26.216714  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:26.398131  278643 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:26.614015  278643 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:27.159843  278643 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:27.364968  278643 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:27.669555  278643 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:27.669750  278643 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.021664  278643 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:28.022031  278643 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.806854  278643 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:29.101949  278643 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:29.804533  278643 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:29.804903  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:30.341296  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:30.816858  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:30.960618  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:31.211332  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:31.505498  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:31.506301  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:31.509226  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:31.513473  278643 out.go:252]   - Booting up control plane ...
	I1206 09:56:31.513588  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:31.513674  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:31.513746  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:31.531878  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:31.532005  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:31.540494  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:31.540946  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:31.541222  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:31.688292  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:31.688412  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:56:52.131955  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 09:56:52.131990  265222 kubeadm.go:319] 
	I1206 09:56:52.132057  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:56:52.135086  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:52.135149  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:52.135269  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:52.135335  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:52.135398  265222 kubeadm.go:319] OS: Linux
	I1206 09:56:52.135462  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:52.135528  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:52.135580  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:52.135635  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:52.135687  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:52.135753  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:52.135820  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:52.135888  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:52.135938  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:52.136021  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:52.136130  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:52.136253  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:52.136339  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:52.141711  265222 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:52.141840  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:52.141916  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:52.141987  265222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:52.142053  265222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:52.142117  265222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:52.142167  265222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:52.142231  265222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:52.142358  265222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142411  265222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:52.142534  265222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142602  265222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:52.142665  265222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:52.142714  265222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:52.142774  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:52.142827  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:52.142886  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:52.142942  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:52.143007  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:52.143067  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:52.143146  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:52.143212  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:52.146063  265222 out.go:252]   - Booting up control plane ...
	I1206 09:56:52.146187  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:52.146272  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:52.146343  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:52.146451  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:52.146548  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:52.146656  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:52.146744  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:52.146786  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:52.146923  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:52.147038  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:56:52.147107  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000059514s
	I1206 09:56:52.147115  265222 kubeadm.go:319] 
	I1206 09:56:52.147172  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:56:52.147209  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:56:52.147316  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:56:52.147324  265222 kubeadm.go:319] 
	I1206 09:56:52.147528  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:56:52.147567  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:56:52.147602  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1206 09:56:52.147720  265222 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000059514s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 09:56:52.147812  265222 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:56:52.147999  265222 kubeadm.go:319] 
	I1206 09:56:52.558950  265222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:56:52.574085  265222 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:52.574157  265222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:52.583215  265222 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:52.583237  265222 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:52.583290  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:52.592240  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:52.592330  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:52.601081  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:52.609915  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:52.609987  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:52.618677  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.627409  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:52.627476  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.635636  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:52.644224  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:52.644339  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:56:52.652667  265222 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:52.772328  265222 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:56:52.772790  265222 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:56:52.844974  265222 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
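The second SystemVerification warning is the most likely culprit on this cgroup-v1 host: kubelet v1.35 deprecates cgroup v1 and, per the warning, must be told explicitly to tolerate it. Following the warning's own instructions, the opt-out is a KubeletConfiguration field (a sketch, assuming it applies unchanged to the config generated above; the YAML spelling of the option is failCgroupV1):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false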
	I1206 10:00:31.687178  278643 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000277363s
	I1206 10:00:31.687206  278643 kubeadm.go:319] 
	I1206 10:00:31.687552  278643 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:31.687635  278643 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:31.687823  278643 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:31.687982  278643 kubeadm.go:319] 
	I1206 10:00:31.688177  278643 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:31.688245  278643 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:31.688306  278643 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:31.688315  278643 kubeadm.go:319] 
	I1206 10:00:31.693600  278643 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:00:31.694063  278643 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:00:31.694183  278643 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:00:31.694443  278643 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:31.694449  278643 kubeadm.go:319] 
	I1206 10:00:31.694518  278643 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:31.694644  278643 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277363s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 10:00:31.694791  278643 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:00:32.113107  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:00:32.127511  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:00:32.127586  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:00:32.136075  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:00:32.136153  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 10:00:32.136236  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:00:32.144649  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:00:32.144725  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:00:32.152703  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:00:32.160876  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:00:32.160972  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:00:32.168760  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.176761  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:00:32.176847  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.184483  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:00:32.192491  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:00:32.192587  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:00:32.200531  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:00:32.244871  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:32.244955  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:32.319347  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:32.319474  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:32.319533  278643 kubeadm.go:319] OS: Linux
	I1206 10:00:32.319600  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:32.319668  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:32.319735  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:32.319804  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:32.319871  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:32.319938  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:32.320001  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:32.320072  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:32.320138  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:32.391588  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:32.391743  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:32.391866  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:32.399952  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:32.402951  278643 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:32.403119  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:32.403230  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:32.403363  278643 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:32.403527  278643 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:32.403636  278643 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:32.403722  278643 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:32.403826  278643 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:32.403924  278643 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:32.404038  278643 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:32.404152  278643 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:32.404224  278643 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:32.404311  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:32.618555  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:32.763900  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:33.042172  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:33.120040  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:33.316584  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:33.317337  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:33.321890  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:33.325150  278643 out.go:252]   - Booting up control plane ...
	I1206 10:00:33.325263  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:33.325348  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:33.327141  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:33.349232  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:33.349348  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:33.357825  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:33.358422  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:33.358496  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:33.491546  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:33.491691  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:53.947913  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:53.947946  265222 kubeadm.go:319] 
	I1206 10:00:53.948017  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:00:53.951147  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:53.951214  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:53.951335  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:53.951432  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:53.951494  265222 kubeadm.go:319] OS: Linux
	I1206 10:00:53.951590  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:53.951656  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:53.951713  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:53.951772  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:53.951824  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:53.951883  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:53.951934  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:53.952003  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:53.952063  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:53.952142  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:53.952242  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:53.952340  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:53.952409  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:53.955466  265222 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:53.955575  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:53.955645  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:53.955732  265222 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:53.955795  265222 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:53.955896  265222 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:53.955964  265222 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:53.956029  265222 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:53.956091  265222 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:53.956171  265222 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:53.956285  265222 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:53.956334  265222 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:53.956402  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:53.956467  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:53.956544  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:53.956617  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:53.956688  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:53.956748  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:53.956848  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:53.956936  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:53.961787  265222 out.go:252]   - Booting up control plane ...
	I1206 10:00:53.961906  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:53.961995  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:53.962068  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:53.962176  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:53.962277  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:53.962386  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:53.962474  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:53.962516  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:53.962650  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:53.962758  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:53.962827  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001252323s
	I1206 10:00:53.962835  265222 kubeadm.go:319] 
	I1206 10:00:53.962892  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:53.962934  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:53.963049  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:53.963060  265222 kubeadm.go:319] 
	I1206 10:00:53.963164  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:53.963200  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:53.963233  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:53.963298  265222 kubeadm.go:403] duration metric: took 8m6.382652277s to StartCluster
	I1206 10:00:53.963352  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:00:53.963521  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:00:53.963616  265222 kubeadm.go:319] 
	I1206 10:00:53.989223  265222 cri.go:89] found id: ""
	I1206 10:00:53.989249  265222 logs.go:282] 0 containers: []
	W1206 10:00:53.989258  265222 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:00:53.989265  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:00:53.989329  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:00:54.028962  265222 cri.go:89] found id: ""
	I1206 10:00:54.029000  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.029010  265222 logs.go:284] No container was found matching "etcd"
	I1206 10:00:54.029026  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:00:54.029137  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:00:54.055727  265222 cri.go:89] found id: ""
	I1206 10:00:54.055751  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.055760  265222 logs.go:284] No container was found matching "coredns"
	I1206 10:00:54.055766  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:00:54.055826  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:00:54.086040  265222 cri.go:89] found id: ""
	I1206 10:00:54.086066  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.086080  265222 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:00:54.086088  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:00:54.086232  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:00:54.112094  265222 cri.go:89] found id: ""
	I1206 10:00:54.112119  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.112127  265222 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:00:54.112134  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:00:54.112192  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:00:54.141768  265222 cri.go:89] found id: ""
	I1206 10:00:54.141793  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.141802  265222 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:00:54.141808  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:00:54.141867  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:00:54.168313  265222 cri.go:89] found id: ""
	I1206 10:00:54.168338  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.168347  265222 logs.go:284] No container was found matching "kindnet"
	I1206 10:00:54.168357  265222 logs.go:123] Gathering logs for kubelet ...
	I1206 10:00:54.168368  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:00:54.224543  265222 logs.go:123] Gathering logs for dmesg ...
	I1206 10:00:54.224578  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:00:54.238829  265222 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:00:54.238859  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:00:54.301151  265222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:00:54.301174  265222 logs.go:123] Gathering logs for containerd ...
	I1206 10:00:54.301185  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:00:54.345045  265222 logs.go:123] Gathering logs for container status ...
	I1206 10:00:54.345077  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:00:54.376879  265222 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:54.376928  265222 out.go:285] * 
	W1206 10:00:54.376993  265222 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.377007  265222 out.go:285] * 
	W1206 10:00:54.379146  265222 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:00:54.386374  265222 out.go:203] 
	W1206 10:00:54.389309  265222 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.389364  265222 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:00:54.389414  265222 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:00:54.392565  265222 out.go:203] 
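The Suggestion above is directly actionable before digging into the post-mortem dump that follows. A hedged retry sketch: the --extra-config flag is quoted from minikube's own suggestion, the cgroup probe is a standard check, and whether this clears the cgroup v1 validation failure on this host is untested:

	# cgroup2fs means the host is on cgroup v2; tmpfs means cgroup v1.
	stat -fc %T /sys/fs/cgroup
	# Retry with the kubelet cgroup driver override that minikube suggests.
	out/minikube-linux-arm64 start -p no-preload-257359 --driver=docker \
	  --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd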
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 09:52:38 no-preload-257359 containerd[759]: time="2025-12-06T09:52:38.191823536Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.275178603Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.277620284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.285785629Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.304007334Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.343348725Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.345594679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.355341259Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.356240418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.412021767Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.415665946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.424382780Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.425219514Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.916947622Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.919648694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.927607211Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.928479671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.049537462Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.052567103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.061652310Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.067188454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.436051059Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.438287839Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.445166461Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.446411675Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:00:57.192067    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:57.192916    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:57.194542    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:57.195135    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:57.196762    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
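The refused connections above match the empty container listing earlier: no kube-apiserver ever started, so nothing listens on 8443. A quick cross-check from the host; crictl is demonstrably present in the node (the log gathering above uses it), while curl inside the kicbase image is an assumption:

	docker exec no-preload-257359 crictl ps -a --name kube-apiserver
	docker exec no-preload-257359 curl -ksS https://localhost:8443/healthz || echo "apiserver down"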
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:00:57 up  1:43,  0 user,  load average: 0.34, 1.47, 2.14
	Linux no-preload-257359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:00:53 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:54 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 10:00:54 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:54 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:54 no-preload-257359 kubelet[5434]: E1206 10:00:54.608161    5434 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:54 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:54 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:55 no-preload-257359 kubelet[5462]: E1206 10:00:55.306984    5462 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:55 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:55 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:56 no-preload-257359 kubelet[5561]: E1206 10:00:56.060186    5561 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 06 10:00:56 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:56 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:56 no-preload-257359 kubelet[5589]: E1206 10:00:56.822006    5589 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
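The kubelet journal above shows the identical validation error on every restart (counter already past 320), so the loop is systemd's Restart= policy replaying a deterministic failure rather than flapping. Two standard systemd commands to watch it without re-reading the dump; nothing minikube-specific is assumed:

	journalctl -u kubelet -f --no-pager    # follow the crash loop live
	systemctl show kubelet -p NRestarts    # restart counter, matching "restart counter is at N" above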
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359: exit status 6 (366.70859ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:00:57.684376  285224 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-257359" apiserver is not running, skipping kubectl commands (state="Stopped")
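The exit status 6 is a kubeconfig problem layered on the cluster failure: the profile never wrote its endpoint into the Jenkins kubeconfig, so status cannot resolve it. A sketch of the check-and-repair path the stdout warning suggests, with the kubeconfig path copied from the stderr above:

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22049-2448/kubeconfig
	out/minikube-linux-arm64 update-context -p no-preload-257359   # re-point kubectl, per the warning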
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-257359
helpers_test.go:243: (dbg) docker inspect no-preload-257359:

-- stdout --
	[
	    {
	        "Id": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	        "Created": "2025-12-06T09:52:27.333376101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:27.474519381Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hostname",
	        "HostsPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hosts",
	        "LogPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26-json.log",
	        "Name": "/no-preload-257359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-257359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-257359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	                "LowerDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-257359",
	                "Source": "/var/lib/docker/volumes/no-preload-257359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-257359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-257359",
	                "name.minikube.sigs.k8s.io": "no-preload-257359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9be8b5c820dd4c3fe37c75e77303bf5032a3f74d4c68aab4997b8f54cdf3a70",
	            "SandboxKey": "/var/run/docker/netns/b9be8b5c820d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-257359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:a5:2f:79:60:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b05bfbfa55363c82b2c20e75689dc6d905b9177d9ed6efb1bc4c663e65903cf4",
	                    "EndpointID": "37f42c3d2ab503584211eef52439f3c17e372039f5b35f15d09e7f8a0c022b40",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-257359",
	                        "76494ba86a40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
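
Note that the HostConfig block above requests only ephemeral host ports (every HostPort is ""); the concrete assignments appear later under NetworkSettings.Ports. As a quick cross-check (a sketch, assuming the container is still running), the same Go template minikube itself uses further down in this log recovers the SSH mapping:

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-257359
  # per the NetworkSettings block above, this would print: 33078
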
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359: exit status 6 (331.267402ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:00:58.032316  285313 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
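
The exit status 6 follows directly from the stderr line above: the no-preload-257359 endpoint is missing from the kubeconfig, which is exactly the stale-context condition the stdout warning describes. The manual fix the warning itself suggests would be (illustrative invocation only, profile name taken from this run):

  out/minikube-linux-arm64 update-context -p no-preload-257359
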
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-257359 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-100767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p embed-certs-100767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ image   │ old-k8s-version-587884 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ pause   │ -p old-k8s-version-587884 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ unpause │ -p old-k8s-version-587884 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-507319                                                                                                                                                                                                                            │ disable-driver-mounts-507319 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ embed-certs-100767 image list --format=json                                                                                                                                                                                                                │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:56:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:56:12.381215  278643 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:56:12.381413  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381441  278643 out.go:374] Setting ErrFile to fd 2...
	I1206 09:56:12.381461  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381758  278643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:56:12.382257  278643 out.go:368] Setting JSON to false
	I1206 09:56:12.383240  278643 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5924,"bootTime":1765009049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:56:12.383355  278643 start.go:143] virtualization:  
	I1206 09:56:12.387258  278643 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:56:12.391484  278643 notify.go:221] Checking for updates...
	I1206 09:56:12.391496  278643 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:56:12.394851  278643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:56:12.398015  278643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:56:12.400990  278643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:56:12.403944  278643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:56:12.407028  278643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:56:12.410729  278643 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:12.410840  278643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:56:12.445065  278643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:56:12.445213  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.519754  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.507997479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.519868  278643 docker.go:319] overlay module found
	I1206 09:56:12.523177  278643 out.go:179] * Using the docker driver based on user configuration
	I1206 09:56:12.526466  278643 start.go:309] selected driver: docker
	I1206 09:56:12.526501  278643 start.go:927] validating driver "docker" against <nil>
	I1206 09:56:12.526518  278643 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:56:12.527486  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.593335  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.584358845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.593500  278643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 09:56:12.593524  278643 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 09:56:12.593752  278643 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:56:12.596647  278643 out.go:179] * Using Docker driver with root privileges
	I1206 09:56:12.599543  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:12.599621  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:12.599637  278643 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:56:12.599733  278643 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:12.602953  278643 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 09:56:12.605789  278643 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 09:56:12.608936  278643 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:56:12.611867  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:12.611918  278643 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 09:56:12.611946  278643 cache.go:65] Caching tarball of preloaded images
	I1206 09:56:12.611951  278643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:56:12.612037  278643 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 09:56:12.612047  278643 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
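	Since the preload tarball was found in the local cache, no download happens here. A sketch to confirm what the cache holds on this runner (path copied from the log lines above; purely illustrative):

	  ls -lh /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/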
	I1206 09:56:12.612154  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:12.612171  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json: {Name:mk449f962f0653f31dbbb03aed6f74703a91443a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:12.631940  278643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:56:12.631967  278643 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:56:12.631982  278643 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:56:12.632013  278643 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:56:12.632117  278643 start.go:364] duration metric: took 83.89µs to acquireMachinesLock for "newest-cni-387337"
	I1206 09:56:12.632148  278643 start.go:93] Provisioning new machine with config: &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:56:12.632223  278643 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:56:12.635711  278643 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:56:12.635957  278643 start.go:159] libmachine.API.Create for "newest-cni-387337" (driver="docker")
	I1206 09:56:12.635999  278643 client.go:173] LocalClient.Create starting
	I1206 09:56:12.636069  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 09:56:12.636109  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636134  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636197  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 09:56:12.636218  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636234  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636615  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:56:12.654202  278643 cli_runner.go:211] docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:56:12.654286  278643 network_create.go:284] running [docker network inspect newest-cni-387337] to gather additional debugging logs...
	I1206 09:56:12.654307  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337
	W1206 09:56:12.674169  278643 cli_runner.go:211] docker network inspect newest-cni-387337 returned with exit code 1
	I1206 09:56:12.674197  278643 network_create.go:287] error running [docker network inspect newest-cni-387337]: docker network inspect newest-cni-387337: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-387337 not found
	I1206 09:56:12.674213  278643 network_create.go:289] output of [docker network inspect newest-cni-387337]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-387337 not found
	
	** /stderr **
	I1206 09:56:12.674320  278643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:12.697162  278643 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
	I1206 09:56:12.697876  278643 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6479799cc46a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:b3:f8:bd:10:a1} reservation:<nil>}
	I1206 09:56:12.698630  278643 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-045bb1cdddf9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:c6:f0:a4:f5:8d} reservation:<nil>}
	I1206 09:56:12.699284  278643 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b05bfbfa5536 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:01:4f:ea:ac:91} reservation:<nil>}
	I1206 09:56:12.700138  278643 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5b80}
	I1206 09:56:12.700211  278643 network_create.go:124] attempt to create docker network newest-cni-387337 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 09:56:12.700393  278643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-387337 newest-cni-387337
	I1206 09:56:12.761289  278643 network_create.go:108] docker network newest-cni-387337 192.168.85.0/24 created
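	The subnet walk above skips every /24 already claimed by an existing bridge (192.168.49/58/67/76) and settles on the first free one, 192.168.85.0/24. A sketch to reproduce the same view of occupied subnets from the shell, independent of minikube:

	  docker network ls --filter driver=bridge -q \
	    | xargs -n1 docker network inspect -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'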
	I1206 09:56:12.761339  278643 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-387337" container
	I1206 09:56:12.761412  278643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:56:12.778118  278643 cli_runner.go:164] Run: docker volume create newest-cni-387337 --label name.minikube.sigs.k8s.io=newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:56:12.796678  278643 oci.go:103] Successfully created a docker volume newest-cni-387337
	I1206 09:56:12.796763  278643 cli_runner.go:164] Run: docker run --rm --name newest-cni-387337-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --entrypoint /usr/bin/test -v newest-cni-387337:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:56:13.320572  278643 oci.go:107] Successfully prepared a docker volume newest-cni-387337
	I1206 09:56:13.320655  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:13.320668  278643 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:56:13.320746  278643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:56:17.285875  278643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.965091453s)
	I1206 09:56:17.285931  278643 kic.go:203] duration metric: took 3.965259503s to extract preloaded images to volume ...
	W1206 09:56:17.286072  278643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 09:56:17.286184  278643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:56:17.343671  278643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-387337 --name newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-387337 --network newest-cni-387337 --ip 192.168.85.2 --volume newest-cni-387337:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
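	Every --publish in the run above uses the 127.0.0.1::PORT form, i.e. bind to loopback on an ephemeral host port. A sketch of how to see which host ports Docker actually picked (container name taken from the command above):

	  docker port newest-cni-387337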
	I1206 09:56:17.667864  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Running}}
	I1206 09:56:17.689633  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.721713  278643 cli_runner.go:164] Run: docker exec newest-cni-387337 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:56:17.785391  278643 oci.go:144] the created container "newest-cni-387337" has a running status.
	I1206 09:56:17.785426  278643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa...
	I1206 09:56:17.929044  278643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:56:17.954229  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.978708  278643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:56:17.978729  278643 kic_runner.go:114] Args: [docker exec --privileged newest-cni-387337 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:56:18.030854  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:18.061034  278643 machine.go:94] provisionDockerMachine start ...
	I1206 09:56:18.061129  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:18.105041  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:18.105395  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:18.105412  278643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:56:18.106117  278643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39934->127.0.0.1:33093: read: connection reset by peer
	I1206 09:56:21.259644  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
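	The first dial above is reset while sshd is still starting inside the container; the retry succeeds a few seconds later. A manual equivalent of this probe, using the key path logged at kic.go:225 and the ephemeral port 33093 from the dialer above (illustrative only; the SSH user "docker" is the one minikube provisions):

	  ssh -o StrictHostKeyChecking=no -p 33093 \
	    -i /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa \
	    docker@127.0.0.1 hostname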
	I1206 09:56:21.259668  278643 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 09:56:21.259730  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.280819  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.281151  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.281167  278643 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 09:56:21.446750  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 09:56:21.446840  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.466708  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.467034  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.467060  278643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:56:21.636152  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:56:21.636184  278643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 09:56:21.636206  278643 ubuntu.go:190] setting up certificates
	I1206 09:56:21.636216  278643 provision.go:84] configureAuth start
	I1206 09:56:21.636276  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:21.657085  278643 provision.go:143] copyHostCerts
	I1206 09:56:21.657167  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 09:56:21.657182  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 09:56:21.657287  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 09:56:21.657399  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 09:56:21.657409  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 09:56:21.657439  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 09:56:21.657519  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 09:56:21.657530  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 09:56:21.657556  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 09:56:21.657626  278643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
	I1206 09:56:22.235324  278643 provision.go:177] copyRemoteCerts
	I1206 09:56:22.235498  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:56:22.235563  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.254382  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.371978  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:56:22.391750  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:56:22.409835  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 09:56:22.427840  278643 provision.go:87] duration metric: took 791.601956ms to configureAuth
	I1206 09:56:22.427871  278643 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:56:22.428075  278643 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:22.428086  278643 machine.go:97] duration metric: took 4.367032221s to provisionDockerMachine
	I1206 09:56:22.428093  278643 client.go:176] duration metric: took 9.792082753s to LocalClient.Create
	I1206 09:56:22.428116  278643 start.go:167] duration metric: took 9.792160612s to libmachine.API.Create "newest-cni-387337"
	I1206 09:56:22.428128  278643 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 09:56:22.428139  278643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:56:22.428194  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:56:22.428238  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.445246  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.552047  278643 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:56:22.555602  278643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:56:22.555631  278643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:56:22.555643  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 09:56:22.555699  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 09:56:22.555780  278643 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 09:56:22.555887  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:56:22.563581  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:56:22.582270  278643 start.go:296] duration metric: took 154.127995ms for postStartSetup
	I1206 09:56:22.582688  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.600191  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:22.600480  278643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:56:22.600532  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.618476  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.721461  278643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:56:22.732754  278643 start.go:128] duration metric: took 10.100506966s to createHost
	I1206 09:56:22.732791  278643 start.go:83] releasing machines lock for "newest-cni-387337", held for 10.100657655s
	I1206 09:56:22.732898  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.752253  278643 ssh_runner.go:195] Run: cat /version.json
	I1206 09:56:22.752314  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.752332  278643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:56:22.752395  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.774900  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.786887  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.981230  278643 ssh_runner.go:195] Run: systemctl --version
	I1206 09:56:22.988594  278643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:56:22.993872  278643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:56:22.993970  278643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:56:23.036477  278643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 09:56:23.036554  278643 start.go:496] detecting cgroup driver to use...
	I1206 09:56:23.036604  278643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 09:56:23.036691  278643 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:56:23.053535  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:56:23.068263  278643 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:56:23.068359  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:56:23.086894  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:56:23.106796  278643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:56:23.229113  278643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:56:23.353681  278643 docker.go:234] disabling docker service ...
	I1206 09:56:23.353777  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:56:23.376315  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:56:23.389550  278643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:56:23.511242  278643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:56:23.632737  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
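The stop/disable/mask sequence above is the standard way to take a systemd service fully out of play; masking is the step that matters, since a merely disabled unit can still be started by a dependency. Condensed from the log's own commands:

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service   # links the unit to /dev/null; even manual starts now fail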
	I1206 09:56:23.646096  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:56:23.661684  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:56:23.671182  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:56:23.680434  278643 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:56:23.680559  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:56:23.689627  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.698546  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:56:23.707890  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.719929  278643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:56:23.733633  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:56:23.743339  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:56:23.753107  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:56:23.763042  278643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:56:23.772383  278643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:56:23.783215  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:23.897379  278643 ssh_runner.go:195] Run: sudo systemctl restart containerd
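Condensed, the containerd reconfiguration in the preceding lines is a handful of in-place sed edits followed by a restart; a minimal replay of the three most consequential ones (commands and values copied from the log itself):

    # pin the sandbox (pause) image and force the cgroupfs driver
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    # point the CRI plugin at the standard CNI config directory
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd

SystemdCgroup = false matches the "cgroupfs" driver detected on the host above; a mismatch between containerd's and the kubelet's cgroup driver is a classic cause of the kind of kubelet health-check failure seen later in this log.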
	I1206 09:56:24.034106  278643 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:56:24.034227  278643 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:56:24.038555  278643 start.go:564] Will wait 60s for crictl version
	I1206 09:56:24.038667  278643 ssh_runner.go:195] Run: which crictl
	I1206 09:56:24.042893  278643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:56:24.073212  278643 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
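The two 60-second waits above can be reproduced by hand; a minimal readiness probe (crictl path and endpoint as configured in /etc/crictl.yaml earlier in the log):

    stat /run/containerd/containerd.sock   # the socket appears once containerd is up
    sudo /usr/local/bin/crictl version     # the CRI API must answer before kubeadm can use it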
	I1206 09:56:24.073340  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.100352  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.125479  278643 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 09:56:24.128585  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:24.145134  278643 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:56:24.149083  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
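The one-liner above is the idempotent /etc/hosts update pattern used throughout these logs: filter out any stale entry for the name, append the fresh mapping, then copy the temp file back under sudo (a plain > redirect into /etc/hosts would fail, because the redirect is opened by the unprivileged shell, not by sudo). Spelled out with the values from this run:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts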
	I1206 09:56:24.161762  278643 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 09:56:24.164661  278643 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:56:24.164804  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:24.164892  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.190128  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.190154  278643 containerd.go:534] Images already preloaded, skipping extraction
	I1206 09:56:24.190214  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.214192  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.214220  278643 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:56:24.214229  278643 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 09:56:24.214329  278643 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
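The [Unit]/[Service] fragment above corresponds to the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf scp'd a few lines below. The empty `ExecStart=` line is deliberate systemd syntax: it clears the base unit's command so the drop-in's ExecStart replaces it rather than appending a second one. The merged result can be verified on the node with:

    systemctl cat kubelet   # prints kubelet.service plus all drop-ins in merge order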
	I1206 09:56:24.214400  278643 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:56:24.241654  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:24.241679  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:24.241702  278643 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 09:56:24.241726  278643 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:56:24.241847  278643 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
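This rendered multi-document config lands at /var/tmp/minikube/kubeadm.yaml (see the scp and cp below) and is the file kubeadm init consumes. As a sketch, recent kubeadm releases can sanity-check such a config offline before an init attempt; whether this exact subcommand is available in v1.35.0-beta.0 is an assumption here:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml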
	
	I1206 09:56:24.241920  278643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:56:24.250168  278643 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:56:24.250236  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:56:24.259935  278643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 09:56:24.273892  278643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:56:24.288011  278643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 09:56:24.300649  278643 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:56:24.304319  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:56:24.314437  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:24.421252  278643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:56:24.437400  278643 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 09:56:24.437465  278643 certs.go:195] generating shared ca certs ...
	I1206 09:56:24.437496  278643 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.437676  278643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 09:56:24.437744  278643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 09:56:24.437767  278643 certs.go:257] generating profile certs ...
	I1206 09:56:24.437853  278643 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 09:56:24.437892  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt with IP's: []
	I1206 09:56:24.906874  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt ...
	I1206 09:56:24.906907  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt: {Name:mk3786951ca6b934a39ce0b897be0476ac498386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907112  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key ...
	I1206 09:56:24.907126  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key: {Name:mk400b28e78f0247222772118d8e6e5e81e847c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907230  278643 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 09:56:24.907249  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 09:56:25.112458  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd ...
	I1206 09:56:25.112494  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd: {Name:mk0b66241f430a839566e8733856f4f7778dd203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.112675  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd ...
	I1206 09:56:25.113433  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd: {Name:mk545f1d084e139bf8c177372caec577367d5287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.113573  278643 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt
	I1206 09:56:25.113667  278643 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key
	I1206 09:56:25.113729  278643 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 09:56:25.113755  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt with IP's: []
	I1206 09:56:25.390925  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt ...
	I1206 09:56:25.390958  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt: {Name:mkf533c4c7795dfadd5e4919382846ec6f68f803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391162  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key ...
	I1206 09:56:25.391180  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key: {Name:mk080dba3e2186a2cc27fdce20eb9b0d79705a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391368  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 09:56:25.391429  278643 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 09:56:25.391438  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:56:25.391466  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 09:56:25.391497  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:56:25.391527  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 09:56:25.391576  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:56:25.392167  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:56:25.411617  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:56:25.431093  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:56:25.449697  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:56:25.468550  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:56:25.487105  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:56:25.505768  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:56:25.525108  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:56:25.543500  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:56:25.562465  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 09:56:25.580776  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 09:56:25.598408  278643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:56:25.612387  278643 ssh_runner.go:195] Run: openssl version
	I1206 09:56:25.618822  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.626357  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:56:25.633933  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637838  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637908  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.679350  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:56:25.686883  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:56:25.694288  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.701929  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 09:56:25.709757  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.713960  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.714081  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.755271  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:56:25.762807  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
	I1206 09:56:25.770247  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.777748  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 09:56:25.785439  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789191  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789278  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.830268  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:56:25.837948  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
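The link names b5213941.0, 51391683.0 and 3ec20f2e.0 created above are OpenSSL subject-hash names: TLS libraries locate a trusted CA by hashing its subject and opening /etc/ssl/certs/<hash>.0. The pattern from the log, written out once:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here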
	I1206 09:56:25.845509  278643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:56:25.849323  278643 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:56:25.849426  278643 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:25.849528  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:56:25.849591  278643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:56:25.875432  278643 cri.go:89] found id: ""
	I1206 09:56:25.875532  278643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:56:25.883715  278643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:56:25.891695  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:25.891813  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:25.899921  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:25.899941  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:25.900032  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:25.908195  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:25.908312  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:25.916060  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:25.924068  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:25.924164  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:25.931858  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.939818  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:25.939921  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.948338  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:25.956152  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:25.956247  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
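The eight probe-and-remove pairs above are one loop unrolled: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint. A compact equivalent (endpoint and file names taken from the log):

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"
    done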
	I1206 09:56:25.963660  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:26.031399  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:26.031465  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:26.131684  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:26.131760  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:26.131802  278643 kubeadm.go:319] OS: Linux
	I1206 09:56:26.131854  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:26.131907  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:26.131958  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:26.132009  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:26.132062  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:26.132114  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:26.132163  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:26.132215  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:26.132262  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:26.203361  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:26.203507  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:26.203605  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:26.210048  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:26.216410  278643 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:26.216591  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:26.216714  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:26.398131  278643 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:26.614015  278643 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:27.159843  278643 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:27.364968  278643 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:27.669555  278643 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:27.669750  278643 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.021664  278643 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:28.022031  278643 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.806854  278643 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:29.101949  278643 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:29.804533  278643 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:29.804903  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:30.341296  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:30.816858  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:30.960618  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:31.211332  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:31.505498  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:31.506301  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:31.509226  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:31.513473  278643 out.go:252]   - Booting up control plane ...
	I1206 09:56:31.513588  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:31.513674  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:31.513746  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:31.531878  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:31.532005  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:31.540494  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:31.540946  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:31.541222  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:31.688292  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:31.688412  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:56:52.131955  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 09:56:52.131990  265222 kubeadm.go:319] 
	I1206 09:56:52.132057  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:56:52.135086  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:52.135149  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:52.135269  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:52.135335  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:52.135398  265222 kubeadm.go:319] OS: Linux
	I1206 09:56:52.135462  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:52.135528  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:52.135580  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:52.135635  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:52.135687  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:52.135753  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:52.135820  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:52.135888  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:52.135938  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:52.136021  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:52.136130  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:52.136253  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:52.136339  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:52.141711  265222 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:52.141840  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:52.141916  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:52.141987  265222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:52.142053  265222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:52.142117  265222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:52.142167  265222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:52.142231  265222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:52.142358  265222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142411  265222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:52.142534  265222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142602  265222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:52.142665  265222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:52.142714  265222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:52.142774  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:52.142827  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:52.142886  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:52.142942  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:52.143007  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:52.143067  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:52.143146  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:52.143212  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:52.146063  265222 out.go:252]   - Booting up control plane ...
	I1206 09:56:52.146187  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:52.146272  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:52.146343  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:52.146451  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:52.146548  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:52.146656  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:52.146744  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:52.146786  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:52.146923  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:52.147038  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:56:52.147107  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000059514s
	I1206 09:56:52.147115  265222 kubeadm.go:319] 
	I1206 09:56:52.147172  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:56:52.147209  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:56:52.147316  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:56:52.147324  265222 kubeadm.go:319] 
	I1206 09:56:52.147528  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:56:52.147567  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:56:52.147602  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1206 09:56:52.147720  265222 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000059514s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
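When kubeadm stalls at wait-control-plane like this, its suggested commands map to a short triage sequence on the node (the healthz URL is the one kubeadm itself was polling):

    systemctl status kubelet --no-pager             # is the service running at all?
    journalctl -xeu kubelet --no-pager | tail -50   # the kubelet's last error messages
    curl -sS http://127.0.0.1:10248/healthz         # the probe that kept being refused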
	
	I1206 09:56:52.147812  265222 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:56:52.147999  265222 kubeadm.go:319] 
	I1206 09:56:52.558950  265222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:56:52.574085  265222 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:52.574157  265222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:52.583215  265222 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:52.583237  265222 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:52.583290  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:52.592240  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:52.592330  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:52.601081  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:52.609915  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:52.609987  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:52.618677  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.627409  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:52.627476  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.635636  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:52.644224  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:52.644339  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:56:52.652667  265222 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:52.772328  265222 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:56:52.772790  265222 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:56:52.844974  265222 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:00:31.687178  278643 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000277363s
	I1206 10:00:31.687206  278643 kubeadm.go:319] 
	I1206 10:00:31.687552  278643 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:31.687635  278643 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:31.687823  278643 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:31.687982  278643 kubeadm.go:319] 
	I1206 10:00:31.688177  278643 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:31.688245  278643 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:31.688306  278643 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:31.688315  278643 kubeadm.go:319] 
	I1206 10:00:31.693600  278643 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:00:31.694063  278643 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:00:31.694183  278643 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:00:31.694443  278643 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:31.694449  278643 kubeadm.go:319] 
	I1206 10:00:31.694518  278643 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:31.694644  278643 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277363s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 10:00:31.694791  278643 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:00:32.113107  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:00:32.127511  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:00:32.127586  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:00:32.136075  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:00:32.136153  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 10:00:32.136236  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:00:32.144649  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:00:32.144725  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:00:32.152703  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:00:32.160876  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:00:32.160972  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:00:32.168760  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.176761  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:00:32.176847  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.184483  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:00:32.192491  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:00:32.192587  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
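The four grep/rm pairs above all follow the same pattern; condensed into a loop (the loop form is an illustration, the individual commands are the ones logged):

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the kubeconfig only if it already points at the expected endpoint
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done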
	I1206 10:00:32.200531  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:00:32.244871  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:32.244955  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:32.319347  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:32.319474  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:32.319533  278643 kubeadm.go:319] OS: Linux
	I1206 10:00:32.319600  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:32.319668  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:32.319735  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:32.319804  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:32.319871  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:32.319938  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:32.320001  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:32.320072  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:32.320138  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:32.391588  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:32.391743  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:32.391866  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:32.399952  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:32.402951  278643 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:32.403119  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:32.403230  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:32.403363  278643 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:32.403527  278643 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:32.403636  278643 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:32.403722  278643 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:32.403826  278643 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:32.403924  278643 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:32.404038  278643 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:32.404152  278643 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:32.404224  278643 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:32.404311  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:32.618555  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:32.763900  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:33.042172  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:33.120040  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:33.316584  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:33.317337  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:33.321890  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:33.325150  278643 out.go:252]   - Booting up control plane ...
	I1206 10:00:33.325263  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:33.325348  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:33.327141  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:33.349232  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:33.349348  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:33.357825  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:33.358422  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:33.358496  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:33.491546  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:33.491691  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:53.947913  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:53.947946  265222 kubeadm.go:319] 
	I1206 10:00:53.948017  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:00:53.951147  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:53.951214  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:53.951335  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:53.951432  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:53.951494  265222 kubeadm.go:319] OS: Linux
	I1206 10:00:53.951590  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:53.951656  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:53.951713  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:53.951772  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:53.951824  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:53.951883  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:53.951934  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:53.952003  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:53.952063  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:53.952142  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:53.952242  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:53.952340  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:53.952409  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:53.955466  265222 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:53.955575  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:53.955645  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:53.955732  265222 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:53.955795  265222 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:53.955896  265222 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:53.955964  265222 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:53.956029  265222 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:53.956091  265222 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:53.956171  265222 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:53.956285  265222 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:53.956334  265222 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:53.956402  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:53.956467  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:53.956544  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:53.956617  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:53.956688  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:53.956748  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:53.956848  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:53.956936  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:53.961787  265222 out.go:252]   - Booting up control plane ...
	I1206 10:00:53.961906  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:53.961995  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:53.962068  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:53.962176  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:53.962277  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:53.962386  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:53.962474  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:53.962516  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:53.962650  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:53.962758  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:53.962827  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001252323s
	I1206 10:00:53.962835  265222 kubeadm.go:319] 
	I1206 10:00:53.962892  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:53.962934  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:53.963049  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:53.963060  265222 kubeadm.go:319] 
	I1206 10:00:53.963164  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:53.963200  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:53.963233  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:53.963298  265222 kubeadm.go:403] duration metric: took 8m6.382652277s to StartCluster
	I1206 10:00:53.963352  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:00:53.963521  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:00:53.963616  265222 kubeadm.go:319] 
	I1206 10:00:53.989223  265222 cri.go:89] found id: ""
	I1206 10:00:53.989249  265222 logs.go:282] 0 containers: []
	W1206 10:00:53.989258  265222 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:00:53.989265  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:00:53.989329  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:00:54.028962  265222 cri.go:89] found id: ""
	I1206 10:00:54.029000  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.029010  265222 logs.go:284] No container was found matching "etcd"
	I1206 10:00:54.029026  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:00:54.029137  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:00:54.055727  265222 cri.go:89] found id: ""
	I1206 10:00:54.055751  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.055760  265222 logs.go:284] No container was found matching "coredns"
	I1206 10:00:54.055766  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:00:54.055826  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:00:54.086040  265222 cri.go:89] found id: ""
	I1206 10:00:54.086066  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.086080  265222 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:00:54.086088  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:00:54.086232  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:00:54.112094  265222 cri.go:89] found id: ""
	I1206 10:00:54.112119  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.112127  265222 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:00:54.112134  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:00:54.112192  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:00:54.141768  265222 cri.go:89] found id: ""
	I1206 10:00:54.141793  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.141802  265222 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:00:54.141808  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:00:54.141867  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:00:54.168313  265222 cri.go:89] found id: ""
	I1206 10:00:54.168338  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.168347  265222 logs.go:284] No container was found matching "kindnet"
	I1206 10:00:54.168357  265222 logs.go:123] Gathering logs for kubelet ...
	I1206 10:00:54.168368  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:00:54.224543  265222 logs.go:123] Gathering logs for dmesg ...
	I1206 10:00:54.224578  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:00:54.238829  265222 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:00:54.238859  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:00:54.301151  265222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:00:54.301174  265222 logs.go:123] Gathering logs for containerd ...
	I1206 10:00:54.301185  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:00:54.345045  265222 logs.go:123] Gathering logs for container status ...
	I1206 10:00:54.345077  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
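To reproduce this triage by hand, the gathering steps above reduce to the following commands (copied from the Run: lines; the describe-nodes call is omitted since it cannot succeed without a running apiserver):

    sudo journalctl -u kubelet -n 400                                        # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings
    sudo journalctl -u containerd -n 400                                     # container runtime logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status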
	W1206 10:00:54.376879  265222 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:54.376928  265222 out.go:285] * 
	W1206 10:00:54.376993  265222 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.377007  265222 out.go:285] * 
	W1206 10:00:54.379146  265222 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:00:54.386374  265222 out.go:203] 
	W1206 10:00:54.389309  265222 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.389364  265222 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:00:54.389414  265222 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
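Applied to a start invocation, the suggested flag looks like this (a sketch: <profile> stands for the affected profile, and this report does not verify that the flag resolves the cgroup v1 validation failure):

    out/minikube-linux-arm64 start -p <profile> --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd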
	I1206 10:00:54.392565  265222 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 09:52:38 no-preload-257359 containerd[759]: time="2025-12-06T09:52:38.191823536Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.275178603Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.277620284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.285785629Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.304007334Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.343348725Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.345594679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.355341259Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.356240418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.412021767Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.415665946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.424382780Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.425219514Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.916947622Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.919648694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.927607211Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.928479671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.049537462Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.052567103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.061652310Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.067188454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.436051059Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.438287839Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.445166461Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.446411675Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:00:58.717708    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:58.718562    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:58.720151    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:58.720700    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:58.722233    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:00:58 up  1:43,  0 user,  load average: 0.34, 1.47, 2.14
	Linux no-preload-257359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:00:55 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:55 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:56 no-preload-257359 kubelet[5561]: E1206 10:00:56.060186    5561 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 06 10:00:56 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:56 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:56 no-preload-257359 kubelet[5589]: E1206 10:00:56.822006    5589 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:56 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:57 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 06 10:00:57 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:57 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:57 no-preload-257359 kubelet[5686]: E1206 10:00:57.582276    5686 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:57 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:57 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:00:58 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 06 10:00:58 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:58 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:00:58 no-preload-257359 kubelet[5722]: E1206 10:00:58.323562    5722 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:00:58 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:00:58 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
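The restart loop above confirms the root cause: kubelet v1.35's configuration validation refuses cgroup v1 hosts by default. A quick way to check which cgroup version a host is running (our addition, not part of the logged output):

    stat -fc %T /sys/fs/cgroup/
    # "cgroup2fs" => unified cgroup v2
    # "tmpfs"     => legacy cgroup v1, which the validation above rejects unless failCgroupV1=false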
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359: exit status 6 (361.998675ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:00:59.200849  285530 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-257359" apiserver is not running, skipping kubectl commands (state="Stopped")
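The `--format={{.APIServer}}` argument is a Go text/template evaluated against minikube's status object, which is why the command prints a single field. A small sketch of the mechanism; the struct here is a stand-in whose field names are inferred from the templates used in this report ({{.Host}}, {{.APIServer}}), not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for minikube's status object.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints "Stopped", matching the output captured above.
		tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Stopped"})
	}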
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (109.8s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1206 10:01:04.755075    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:01:09.863088    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:01:36.062170    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:01:37.567172    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:02:26.677236    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m48.268492382s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
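All four manifests fail with the same error because `kubectl apply` validates against the apiserver's OpenAPI schema, and nothing is listening on localhost:8443 (kubelet never got the control plane up). A quick reachability probe makes the failure mode obvious; this is a hedged sketch using the address from the error text above, with TLS verification disabled only because the probe ignores the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8443/openapi/v2")
		if err != nil {
			// On this node the result is "connection refused": the
			// apiserver never came up because kubelet is crash-looping.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}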
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-257359 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-257359 describe deploy/metrics-server -n kube-system: exit status 1 (62.078674ms)

** stderr ** 
	error: context "no-preload-257359" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-257359 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-257359
helpers_test.go:243: (dbg) docker inspect no-preload-257359:

-- stdout --
	[
	    {
	        "Id": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	        "Created": "2025-12-06T09:52:27.333376101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:52:27.474519381Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hostname",
	        "HostsPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hosts",
	        "LogPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26-json.log",
	        "Name": "/no-preload-257359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-257359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-257359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	                "LowerDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-257359",
	                "Source": "/var/lib/docker/volumes/no-preload-257359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-257359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-257359",
	                "name.minikube.sigs.k8s.io": "no-preload-257359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9be8b5c820dd4c3fe37c75e77303bf5032a3f74d4c68aab4997b8f54cdf3a70",
	            "SandboxKey": "/var/run/docker/netns/b9be8b5c820d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-257359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:a5:2f:79:60:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b05bfbfa55363c82b2c20e75689dc6d905b9177d9ed6efb1bc4c663e65903cf4",
	                    "EndpointID": "37f42c3d2ab503584211eef52439f3c17e372039f5b35f15d09e7f8a0c022b40",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-257359",
	                        "76494ba86a40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
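Note the split in the inspect output above: HostConfig.PortBindings requests 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral ports, and the actual assignments (33078-33082) appear only under NetworkSettings.Ports. A sketch of reading the live mapping with the Docker Go SDK (assuming github.com/docker/docker/client; the container name is taken from this report):

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		insp, err := cli.ContainerInspect(context.Background(), "no-preload-257359")
		if err != nil {
			panic(err)
		}
		// An empty HostPort in HostConfig means "pick an ephemeral port";
		// the chosen ports live only in NetworkSettings.Ports.
		for port, bindings := range insp.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}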
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359: exit status 6 (325.486789ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:02:47.880177  287426 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-257359 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-100767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:53 UTC │
	│ image   │ old-k8s-version-587884 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ pause   │ -p old-k8s-version-587884 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ unpause │ -p old-k8s-version-587884 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-507319                                                                                                                                                                                                                            │ disable-driver-mounts-507319 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ embed-certs-100767 image list --format=json                                                                                                                                                                                                                │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:00 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:56:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:56:12.381215  278643 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:56:12.381413  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381441  278643 out.go:374] Setting ErrFile to fd 2...
	I1206 09:56:12.381461  278643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:12.381758  278643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:56:12.382257  278643 out.go:368] Setting JSON to false
	I1206 09:56:12.383240  278643 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5924,"bootTime":1765009049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:56:12.383355  278643 start.go:143] virtualization:  
	I1206 09:56:12.387258  278643 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:56:12.391484  278643 notify.go:221] Checking for updates...
	I1206 09:56:12.391496  278643 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:56:12.394851  278643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:56:12.398015  278643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:56:12.400990  278643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:56:12.403944  278643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:56:12.407028  278643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:56:12.410729  278643 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:12.410840  278643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:56:12.445065  278643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:56:12.445213  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.519754  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.507997479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.519868  278643 docker.go:319] overlay module found
	I1206 09:56:12.523177  278643 out.go:179] * Using the docker driver based on user configuration
	I1206 09:56:12.526466  278643 start.go:309] selected driver: docker
	I1206 09:56:12.526501  278643 start.go:927] validating driver "docker" against <nil>
	I1206 09:56:12.526518  278643 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:56:12.527486  278643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:56:12.593335  278643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:56:12.584358845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:56:12.593500  278643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 09:56:12.593524  278643 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 09:56:12.593752  278643 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 09:56:12.596647  278643 out.go:179] * Using Docker driver with root privileges
	I1206 09:56:12.599543  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:12.599621  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:12.599637  278643 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 09:56:12.599733  278643 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:12.602953  278643 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 09:56:12.605789  278643 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 09:56:12.608936  278643 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:56:12.611867  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:12.611918  278643 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 09:56:12.611946  278643 cache.go:65] Caching tarball of preloaded images
	I1206 09:56:12.611951  278643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:56:12.612037  278643 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 09:56:12.612047  278643 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 09:56:12.612154  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:12.612171  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json: {Name:mk449f962f0653f31dbbb03aed6f74703a91443a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:12.631940  278643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 09:56:12.631967  278643 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 09:56:12.631982  278643 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:56:12.632013  278643 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:56:12.632117  278643 start.go:364] duration metric: took 83.89µs to acquireMachinesLock for "newest-cni-387337"
	I1206 09:56:12.632148  278643 start.go:93] Provisioning new machine with config: &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:56:12.632223  278643 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:56:12.635711  278643 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 09:56:12.635957  278643 start.go:159] libmachine.API.Create for "newest-cni-387337" (driver="docker")
	I1206 09:56:12.635999  278643 client.go:173] LocalClient.Create starting
	I1206 09:56:12.636069  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 09:56:12.636109  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636134  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636197  278643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 09:56:12.636218  278643 main.go:143] libmachine: Decoding PEM data...
	I1206 09:56:12.636234  278643 main.go:143] libmachine: Parsing certificate...
	I1206 09:56:12.636615  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:56:12.654202  278643 cli_runner.go:211] docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:56:12.654286  278643 network_create.go:284] running [docker network inspect newest-cni-387337] to gather additional debugging logs...
	I1206 09:56:12.654307  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337
	W1206 09:56:12.674169  278643 cli_runner.go:211] docker network inspect newest-cni-387337 returned with exit code 1
	I1206 09:56:12.674197  278643 network_create.go:287] error running [docker network inspect newest-cni-387337]: docker network inspect newest-cni-387337: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-387337 not found
	I1206 09:56:12.674213  278643 network_create.go:289] output of [docker network inspect newest-cni-387337]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-387337 not found
	
	** /stderr **
	I1206 09:56:12.674320  278643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:12.697162  278643 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
	I1206 09:56:12.697876  278643 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6479799cc46a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:b3:f8:bd:10:a1} reservation:<nil>}
	I1206 09:56:12.698630  278643 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-045bb1cdddf9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:c6:f0:a4:f5:8d} reservation:<nil>}
	I1206 09:56:12.699284  278643 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b05bfbfa5536 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:01:4f:ea:ac:91} reservation:<nil>}
	I1206 09:56:12.700138  278643 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5b80}
	I1206 09:56:12.700211  278643 network_create.go:124] attempt to create docker network newest-cni-387337 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 09:56:12.700393  278643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-387337 newest-cni-387337
	I1206 09:56:12.761289  278643 network_create.go:108] docker network newest-cni-387337 192.168.85.0/24 created
	I1206 09:56:12.761339  278643 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-387337" container
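	// The four "skipping subnet" lines above show minikube scanning private
	// /24s until it finds one no existing bridge claims: .49, .58, .67, and
	// .76 are taken, so 192.168.85.0/24 wins. A toy Go sketch of that scan;
	// the start address and the step of 9 in the third octet are read off
	// this log, not taken from minikube's source.
	package main
	
	import "fmt"
	
	func main() {
		// Third octets already held by Docker bridges on this host, per the log.
		taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
		for octet := 49; octet <= 254; octet += 9 {
			if !taken[octet] {
				fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
				return
			}
		}
	}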
	I1206 09:56:12.761412  278643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:56:12.778118  278643 cli_runner.go:164] Run: docker volume create newest-cni-387337 --label name.minikube.sigs.k8s.io=newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:56:12.796678  278643 oci.go:103] Successfully created a docker volume newest-cni-387337
	I1206 09:56:12.796763  278643 cli_runner.go:164] Run: docker run --rm --name newest-cni-387337-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --entrypoint /usr/bin/test -v newest-cni-387337:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:56:13.320572  278643 oci.go:107] Successfully prepared a docker volume newest-cni-387337
	I1206 09:56:13.320655  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:13.320668  278643 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:56:13.320746  278643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:56:17.285875  278643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-387337:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.965091453s)
	I1206 09:56:17.285931  278643 kic.go:203] duration metric: took 3.965259503s to extract preloaded images to volume ...
	W1206 09:56:17.286072  278643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 09:56:17.286184  278643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:56:17.343671  278643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-387337 --name newest-cni-387337 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-387337 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-387337 --network newest-cni-387337 --ip 192.168.85.2 --volume newest-cni-387337:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:56:17.667864  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Running}}
	I1206 09:56:17.689633  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.721713  278643 cli_runner.go:164] Run: docker exec newest-cni-387337 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:56:17.785391  278643 oci.go:144] the created container "newest-cni-387337" has a running status.
	I1206 09:56:17.785426  278643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa...
	I1206 09:56:17.929044  278643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:56:17.954229  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:17.978708  278643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:56:17.978729  278643 kic_runner.go:114] Args: [docker exec --privileged newest-cni-387337 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:56:18.030854  278643 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 09:56:18.061034  278643 machine.go:94] provisionDockerMachine start ...
	I1206 09:56:18.061129  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:18.105041  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:18.105395  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:18.105412  278643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:56:18.106117  278643 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39934->127.0.0.1:33093: read: connection reset by peer
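The connection reset above is expected on first contact: sshd inside the freshly created container is still coming up, and libmachine simply retries until the hostname probe succeeds a few seconds later (below). The equivalent manual probe, as a sketch using the forwarded port and key path from this log:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa \
	    -p 33093 docker@127.0.0.1 hostname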
	I1206 09:56:21.259644  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 09:56:21.259668  278643 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 09:56:21.259730  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.280819  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.281151  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.281167  278643 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 09:56:21.446750  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 09:56:21.446840  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:21.466708  278643 main.go:143] libmachine: Using SSH client type: native
	I1206 09:56:21.467034  278643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 09:56:21.467060  278643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:56:21.636152  278643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:56:21.636184  278643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 09:56:21.636206  278643 ubuntu.go:190] setting up certificates
	I1206 09:56:21.636216  278643 provision.go:84] configureAuth start
	I1206 09:56:21.636276  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:21.657085  278643 provision.go:143] copyHostCerts
	I1206 09:56:21.657167  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 09:56:21.657182  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 09:56:21.657287  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 09:56:21.657399  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 09:56:21.657409  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 09:56:21.657439  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 09:56:21.657519  278643 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 09:56:21.657530  278643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 09:56:21.657556  278643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 09:56:21.657626  278643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
	I1206 09:56:22.235324  278643 provision.go:177] copyRemoteCerts
	I1206 09:56:22.235498  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:56:22.235563  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.254382  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.371978  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:56:22.391750  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:56:22.409835  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 09:56:22.427840  278643 provision.go:87] duration metric: took 791.601956ms to configureAuth
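configureAuth generated a server certificate whose SANs cover 127.0.0.1, 192.168.85.2, localhost, minikube, and newest-cni-387337, then copied it into /etc/docker on the node. A quick sketch to confirm the SANs actually landed in the issued cert, using the path from the log:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'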
	I1206 09:56:22.427871  278643 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:56:22.428075  278643 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:56:22.428086  278643 machine.go:97] duration metric: took 4.367032221s to provisionDockerMachine
	I1206 09:56:22.428093  278643 client.go:176] duration metric: took 9.792082753s to LocalClient.Create
	I1206 09:56:22.428116  278643 start.go:167] duration metric: took 9.792160612s to libmachine.API.Create "newest-cni-387337"
	I1206 09:56:22.428128  278643 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 09:56:22.428139  278643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:56:22.428194  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:56:22.428238  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.445246  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.552047  278643 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:56:22.555602  278643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:56:22.555631  278643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:56:22.555643  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 09:56:22.555699  278643 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 09:56:22.555780  278643 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 09:56:22.555887  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:56:22.563581  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:56:22.582270  278643 start.go:296] duration metric: took 154.127995ms for postStartSetup
	I1206 09:56:22.582688  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.600191  278643 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 09:56:22.600480  278643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:56:22.600532  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.618476  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.721461  278643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:56:22.732754  278643 start.go:128] duration metric: took 10.100506966s to createHost
	I1206 09:56:22.732791  278643 start.go:83] releasing machines lock for "newest-cni-387337", held for 10.100657655s
	I1206 09:56:22.732898  278643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 09:56:22.752253  278643 ssh_runner.go:195] Run: cat /version.json
	I1206 09:56:22.752314  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.752332  278643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:56:22.752395  278643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 09:56:22.774900  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.786887  278643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 09:56:22.981230  278643 ssh_runner.go:195] Run: systemctl --version
	I1206 09:56:22.988594  278643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:56:22.993872  278643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:56:22.993970  278643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:56:23.036477  278643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 09:56:23.036554  278643 start.go:496] detecting cgroup driver to use...
	I1206 09:56:23.036604  278643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 09:56:23.036691  278643 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:56:23.053535  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:56:23.068263  278643 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:56:23.068359  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:56:23.086894  278643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:56:23.106796  278643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:56:23.229113  278643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:56:23.353681  278643 docker.go:234] disabling docker service ...
	I1206 09:56:23.353777  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:56:23.376315  278643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:56:23.389550  278643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:56:23.511242  278643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:56:23.632737  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:56:23.646096  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:56:23.661684  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:56:23.671182  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:56:23.680434  278643 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:56:23.680559  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:56:23.689627  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.698546  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:56:23.707890  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:56:23.719929  278643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:56:23.733633  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:56:23.743339  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:56:23.753107  278643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:56:23.763042  278643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:56:23.772383  278643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:56:23.783215  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:23.897379  278643 ssh_runner.go:195] Run: sudo systemctl restart containerd
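The sed edits above pin the sandbox (pause) image, force the cgroupfs cgroup driver (SystemdCgroup = false), and normalize the runc shim to io.containerd.runc.v2 before containerd is restarted. A sketch for verifying the rewritten keys on the node:

	sudo grep -E 'sandbox_image|SystemdCgroup|io.containerd.runc' /etc/containerd/config.toml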
	I1206 09:56:24.034106  278643 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:56:24.034227  278643 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:56:24.038555  278643 start.go:564] Will wait 60s for crictl version
	I1206 09:56:24.038667  278643 ssh_runner.go:195] Run: which crictl
	I1206 09:56:24.042893  278643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:56:24.073212  278643 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
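crictl resolves its runtime endpoint from the /etc/crictl.yaml written a few steps earlier, so the version probe above talks to containerd over unix:///run/containerd/containerd.sock. The same check by hand, as a sketch:

	cat /etc/crictl.yaml
	sudo crictl version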
	I1206 09:56:24.073340  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.100352  278643 ssh_runner.go:195] Run: containerd --version
	I1206 09:56:24.125479  278643 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 09:56:24.128585  278643 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:56:24.145134  278643 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 09:56:24.149083  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:56:24.161762  278643 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 09:56:24.164661  278643 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:56:24.164804  278643 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 09:56:24.164892  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.190128  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.190154  278643 containerd.go:534] Images already preloaded, skipping extraction
	I1206 09:56:24.190214  278643 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:56:24.214192  278643 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:56:24.214220  278643 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:56:24.214229  278643 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 09:56:24.214329  278643 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:56:24.214400  278643 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:56:24.241654  278643 cni.go:84] Creating CNI manager for ""
	I1206 09:56:24.241679  278643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 09:56:24.241702  278643 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 09:56:24.241726  278643 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:56:24.241847  278643 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:56:24.241920  278643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:56:24.250168  278643 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:56:24.250236  278643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:56:24.259935  278643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 09:56:24.273892  278643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:56:24.288011  278643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
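The 2235-byte kubeadm.yaml.new staged here is the config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file). Recent kubeadm releases can sanity-check such a file offline before init; a sketch, assuming the subcommand is available in the v1.35.0-beta.0 binary:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new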
	I1206 09:56:24.300649  278643 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:56:24.304319  278643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:56:24.314437  278643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:56:24.421252  278643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:56:24.437400  278643 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 09:56:24.437465  278643 certs.go:195] generating shared ca certs ...
	I1206 09:56:24.437496  278643 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.437676  278643 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 09:56:24.437744  278643 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 09:56:24.437767  278643 certs.go:257] generating profile certs ...
	I1206 09:56:24.437853  278643 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 09:56:24.437892  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt with IP's: []
	I1206 09:56:24.906874  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt ...
	I1206 09:56:24.906907  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.crt: {Name:mk3786951ca6b934a39ce0b897be0476ac498386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907112  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key ...
	I1206 09:56:24.907126  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key: {Name:mk400b28e78f0247222772118d8e6e5e81e847c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:24.907230  278643 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 09:56:24.907249  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 09:56:25.112458  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd ...
	I1206 09:56:25.112494  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd: {Name:mk0b66241f430a839566e8733856f4f7778dd203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.112675  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd ...
	I1206 09:56:25.113433  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd: {Name:mk545f1d084e139bf8c177372caec577367d5287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.113573  278643 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt
	I1206 09:56:25.113667  278643 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key
	I1206 09:56:25.113729  278643 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 09:56:25.113755  278643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt with IP's: []
	I1206 09:56:25.390925  278643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt ...
	I1206 09:56:25.390958  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt: {Name:mkf533c4c7795dfadd5e4919382846ec6f68f803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391162  278643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key ...
	I1206 09:56:25.391180  278643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key: {Name:mk080dba3e2186a2cc27fdce20eb9b0d79705a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:56:25.391368  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 09:56:25.391429  278643 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 09:56:25.391438  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:56:25.391466  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 09:56:25.391497  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:56:25.391527  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 09:56:25.391576  278643 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 09:56:25.392167  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:56:25.411617  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 09:56:25.431093  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:56:25.449697  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:56:25.468550  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:56:25.487105  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:56:25.505768  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:56:25.525108  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:56:25.543500  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:56:25.562465  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 09:56:25.580776  278643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 09:56:25.598408  278643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:56:25.612387  278643 ssh_runner.go:195] Run: openssl version
	I1206 09:56:25.618822  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.626357  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:56:25.633933  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637838  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.637908  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:56:25.679350  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:56:25.686883  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:56:25.694288  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.701929  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 09:56:25.709757  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.713960  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.714081  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 09:56:25.755271  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:56:25.762807  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
	I1206 09:56:25.770247  278643 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.777748  278643 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 09:56:25.785439  278643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789191  278643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.789278  278643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 09:56:25.830268  278643 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:56:25.837948  278643 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
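The openssl x509 -hash runs above compute the subject-name hash that OpenSSL uses to look up trust anchors in /etc/ssl/certs; the symlink names (b5213941.0, 51391683.0, 3ec20f2e.0) are exactly those hashes plus a .0 collision suffix. Reproduced by hand for the first CA, as a sketch:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above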
	I1206 09:56:25.845509  278643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:56:25.849323  278643 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:56:25.849426  278643 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:56:25.849528  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:56:25.849591  278643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:56:25.875432  278643 cri.go:89] found id: ""
	I1206 09:56:25.875532  278643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:56:25.883715  278643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:56:25.891695  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:25.891813  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:25.899921  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:25.899941  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:25.900032  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:25.908195  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:25.908312  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:25.916060  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:25.924068  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:25.924164  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:25.931858  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.939818  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:25.939921  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:25.948338  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:25.956152  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:25.956247  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:56:25.963660  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:26.031399  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:26.031465  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:26.131684  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:26.131760  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:26.131802  278643 kubeadm.go:319] OS: Linux
	I1206 09:56:26.131854  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:26.131907  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:26.131958  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:26.132009  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:26.132062  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:26.132114  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:26.132163  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:26.132215  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:26.132262  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:26.203361  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:26.203507  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:26.203605  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:26.210048  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:26.216410  278643 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:26.216591  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:26.216714  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:26.398131  278643 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:26.614015  278643 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:27.159843  278643 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:27.364968  278643 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:27.669555  278643 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:27.669750  278643 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.021664  278643 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:28.022031  278643 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 09:56:28.806854  278643 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:29.101949  278643 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:29.804533  278643 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:29.804903  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:30.341296  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:30.816858  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:30.960618  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:31.211332  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:31.505498  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:31.506301  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:31.509226  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:31.513473  278643 out.go:252]   - Booting up control plane ...
	I1206 09:56:31.513588  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:31.513674  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:31.513746  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:31.531878  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:31.532005  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:31.540494  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:31.540946  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:31.541222  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:31.688292  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:31.688412  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:56:52.131955  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 09:56:52.131990  265222 kubeadm.go:319] 
	I1206 09:56:52.132057  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 09:56:52.135086  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 09:56:52.135149  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:56:52.135269  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:56:52.135335  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 09:56:52.135398  265222 kubeadm.go:319] OS: Linux
	I1206 09:56:52.135462  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:56:52.135528  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 09:56:52.135580  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:56:52.135635  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:56:52.135687  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:56:52.135753  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:56:52.135820  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:56:52.135888  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:56:52.135938  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 09:56:52.136021  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:56:52.136130  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:56:52.136253  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:56:52.136339  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:56:52.141711  265222 out.go:252]   - Generating certificates and keys ...
	I1206 09:56:52.141840  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:56:52.141916  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:56:52.141987  265222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:56:52.142053  265222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:56:52.142117  265222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:56:52.142167  265222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:56:52.142231  265222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:56:52.142358  265222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142411  265222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:56:52.142534  265222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 09:56:52.142602  265222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:56:52.142665  265222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:56:52.142714  265222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:56:52.142774  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:56:52.142827  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:56:52.142886  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:56:52.142942  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:56:52.143007  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:56:52.143067  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:56:52.143146  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:56:52.143212  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:56:52.146063  265222 out.go:252]   - Booting up control plane ...
	I1206 09:56:52.146187  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:56:52.146272  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:56:52.146343  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:56:52.146451  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:56:52.146548  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:56:52.146656  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:56:52.146744  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:56:52.146786  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:56:52.146923  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:56:52.147038  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:56:52.147107  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000059514s
	I1206 09:56:52.147115  265222 kubeadm.go:319] 
	I1206 09:56:52.147172  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 09:56:52.147209  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 09:56:52.147316  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 09:56:52.147324  265222 kubeadm.go:319] 
	I1206 09:56:52.147528  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 09:56:52.147567  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 09:56:52.147602  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1206 09:56:52.147720  265222 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-257359] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000059514s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
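The failure above names the kubelet as the component to inspect. A minimal sketch of the checks kubeadm suggests, run inside the node (reaching it via minikube ssh with the no-preload-257359 profile is an assumption about this CI setup; the health URL and commands are taken from the log):

    # Probe the health endpoint kubeadm polled for 4m0s.
    minikube ssh -p no-preload-257359 -- curl -sSL http://127.0.0.1:10248/healthz
    # Inspect the kubelet unit and its recent journal entries, as suggested above.
    minikube ssh -p no-preload-257359 -- sudo systemctl status kubelet
    minikube ssh -p no-preload-257359 -- sudo journalctl -xeu kubelet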
	
	I1206 09:56:52.147812  265222 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 09:56:52.147999  265222 kubeadm.go:319] 
	I1206 09:56:52.558950  265222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:56:52.574085  265222 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:56:52.574157  265222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:56:52.583215  265222 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:56:52.583237  265222 kubeadm.go:158] found existing configuration files:
	
	I1206 09:56:52.583290  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:56:52.592240  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:56:52.592330  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:56:52.601081  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:56:52.609915  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:56:52.609987  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:56:52.618677  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.627409  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:56:52.627476  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:56:52.635636  265222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:56:52.644224  265222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:56:52.644339  265222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
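The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A hedged shell sketch of the same pattern (the loop is illustrative, not minikube's actual Go source):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Remove the file unless it targets the in-cluster control-plane address.
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done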
	I1206 09:56:52.652667  265222 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:56:52.772328  265222 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 09:56:52.772790  265222 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 09:56:52.844974  265222 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:00:31.687178  278643 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000277363s
	I1206 10:00:31.687206  278643 kubeadm.go:319] 
	I1206 10:00:31.687552  278643 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:31.687635  278643 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:31.687823  278643 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:31.687982  278643 kubeadm.go:319] 
	I1206 10:00:31.688177  278643 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:31.688245  278643 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:31.688306  278643 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:31.688315  278643 kubeadm.go:319] 
	I1206 10:00:31.693600  278643 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:00:31.694063  278643 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:00:31.694183  278643 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:00:31.694443  278643 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:31.694449  278643 kubeadm.go:319] 
	I1206 10:00:31.694518  278643 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:31.694644  278643 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-387337] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277363s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 10:00:31.694791  278643 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:00:32.113107  278643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:00:32.127511  278643 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:00:32.127586  278643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:00:32.136075  278643 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:00:32.136153  278643 kubeadm.go:158] found existing configuration files:
	
	I1206 10:00:32.136236  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:00:32.144649  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:00:32.144725  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:00:32.152703  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:00:32.160876  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:00:32.160972  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:00:32.168760  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.176761  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:00:32.176847  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:00:32.184483  278643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:00:32.192491  278643 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:00:32.192587  278643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:00:32.200531  278643 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:00:32.244871  278643 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:32.244955  278643 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:32.319347  278643 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:32.319474  278643 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:32.319533  278643 kubeadm.go:319] OS: Linux
	I1206 10:00:32.319600  278643 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:32.319668  278643 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:32.319735  278643 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:32.319804  278643 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:32.319871  278643 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:32.319938  278643 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:32.320001  278643 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:32.320072  278643 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:32.320138  278643 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:32.391588  278643 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:32.391743  278643 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:32.391866  278643 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:32.399952  278643 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:32.402951  278643 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:32.403119  278643 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:32.403230  278643 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:32.403363  278643 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:32.403527  278643 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:32.403636  278643 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:32.403722  278643 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:32.403826  278643 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:32.403924  278643 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:32.404038  278643 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:32.404152  278643 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:32.404224  278643 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:32.404311  278643 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:32.618555  278643 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:32.763900  278643 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:33.042172  278643 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:33.120040  278643 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:33.316584  278643 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:33.317337  278643 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:33.321890  278643 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:33.325150  278643 out.go:252]   - Booting up control plane ...
	I1206 10:00:33.325263  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:33.325348  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:33.327141  278643 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:33.349232  278643 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:33.349348  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:33.357825  278643 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:33.358422  278643 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:33.358496  278643 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:33.491546  278643 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:33.491691  278643 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:53.947913  265222 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:00:53.947946  265222 kubeadm.go:319] 
	I1206 10:00:53.948017  265222 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:00:53.951147  265222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:00:53.951214  265222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:00:53.951335  265222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:00:53.951432  265222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:00:53.951494  265222 kubeadm.go:319] OS: Linux
	I1206 10:00:53.951590  265222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:00:53.951656  265222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:00:53.951713  265222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:00:53.951772  265222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:00:53.951824  265222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:00:53.951883  265222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:00:53.951934  265222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:00:53.952003  265222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:00:53.952063  265222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:00:53.952142  265222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:00:53.952242  265222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:00:53.952340  265222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:00:53.952409  265222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:00:53.955466  265222 out.go:252]   - Generating certificates and keys ...
	I1206 10:00:53.955575  265222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:00:53.955645  265222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:00:53.955732  265222 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:00:53.955795  265222 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:00:53.955896  265222 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:00:53.955964  265222 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:00:53.956029  265222 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:00:53.956091  265222 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:00:53.956171  265222 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:00:53.956285  265222 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:00:53.956334  265222 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:00:53.956402  265222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:00:53.956467  265222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:00:53.956544  265222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:00:53.956617  265222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:00:53.956688  265222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:00:53.956748  265222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:00:53.956848  265222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:00:53.956936  265222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:00:53.961787  265222 out.go:252]   - Booting up control plane ...
	I1206 10:00:53.961906  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:00:53.961995  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:00:53.962068  265222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:00:53.962176  265222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:00:53.962277  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:00:53.962386  265222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:00:53.962474  265222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:00:53.962516  265222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:00:53.962650  265222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:00:53.962758  265222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:00:53.962827  265222 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001252323s
	I1206 10:00:53.962835  265222 kubeadm.go:319] 
	I1206 10:00:53.962892  265222 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:00:53.962934  265222 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:00:53.963049  265222 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:00:53.963060  265222 kubeadm.go:319] 
	I1206 10:00:53.963164  265222 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:00:53.963200  265222 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:00:53.963233  265222 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:00:53.963298  265222 kubeadm.go:403] duration metric: took 8m6.382652277s to StartCluster
	I1206 10:00:53.963352  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:00:53.963521  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:00:53.963616  265222 kubeadm.go:319] 
	I1206 10:00:53.989223  265222 cri.go:89] found id: ""
	I1206 10:00:53.989249  265222 logs.go:282] 0 containers: []
	W1206 10:00:53.989258  265222 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:00:53.989265  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:00:53.989329  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:00:54.028962  265222 cri.go:89] found id: ""
	I1206 10:00:54.029000  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.029010  265222 logs.go:284] No container was found matching "etcd"
	I1206 10:00:54.029026  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:00:54.029137  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:00:54.055727  265222 cri.go:89] found id: ""
	I1206 10:00:54.055751  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.055760  265222 logs.go:284] No container was found matching "coredns"
	I1206 10:00:54.055766  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:00:54.055826  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:00:54.086040  265222 cri.go:89] found id: ""
	I1206 10:00:54.086066  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.086080  265222 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:00:54.086088  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:00:54.086232  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:00:54.112094  265222 cri.go:89] found id: ""
	I1206 10:00:54.112119  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.112127  265222 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:00:54.112134  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:00:54.112192  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:00:54.141768  265222 cri.go:89] found id: ""
	I1206 10:00:54.141793  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.141802  265222 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:00:54.141808  265222 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:00:54.141867  265222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:00:54.168313  265222 cri.go:89] found id: ""
	I1206 10:00:54.168338  265222 logs.go:282] 0 containers: []
	W1206 10:00:54.168347  265222 logs.go:284] No container was found matching "kindnet"
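All seven crictl sweeps above come back empty because the kubelet never launched the static pods. A compact equivalent of the sweep, looping over the component names listed in the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      # An empty result corresponds to a "0 containers" line above.
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done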
	I1206 10:00:54.168357  265222 logs.go:123] Gathering logs for kubelet ...
	I1206 10:00:54.168368  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:00:54.224543  265222 logs.go:123] Gathering logs for dmesg ...
	I1206 10:00:54.224578  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:00:54.238829  265222 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:00:54.238859  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:00:54.301151  265222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:00:54.292707    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.293236    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.294877    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.295455    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:00:54.297128    5417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:00:54.301174  265222 logs.go:123] Gathering logs for containerd ...
	I1206 10:00:54.301185  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:00:54.345045  265222 logs.go:123] Gathering logs for container status ...
	I1206 10:00:54.345077  265222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
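For diagnosis, minikube then collects kubelet, dmesg, node, containerd, and container-status output. The commands below are copied from the Run: lines above (with the `which crictl` fallback simplified):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo crictl ps -a || sudo docker ps -a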
	W1206 10:00:54.376879  265222 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:00:54.376928  265222 out.go:285] * 
	W1206 10:00:54.376993  265222 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.377007  265222 out.go:285] * 
	W1206 10:00:54.379146  265222 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:00:54.386374  265222 out.go:203] 
	W1206 10:00:54.389309  265222 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001252323s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:00:54.389364  265222 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:00:54.389414  265222 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:00:54.392565  265222 out.go:203] 
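The suggestion above is to retry with the systemd cgroup driver. A hedged sketch of that retry, mirroring the flags this profile appears to have been started with (the exact flag set is an assumption reconstructed from this report; only --extra-config=kubelet.cgroup-driver=systemd comes verbatim from the suggestion):

    minikube start -p no-preload-257359 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd
    # Per the cgroups v1 warning above, kubelet v1.35+ on a cgroup v1 host also
    # needs the KubeletConfiguration field FailCgroupV1 set to false.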
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 09:52:38 no-preload-257359 containerd[759]: time="2025-12-06T09:52:38.191823536Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.275178603Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.277620284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.285785629Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:39 no-preload-257359 containerd[759]: time="2025-12-06T09:52:39.304007334Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.343348725Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.345594679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.355341259Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:40 no-preload-257359 containerd[759]: time="2025-12-06T09:52:40.356240418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.412021767Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.415665946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.424382780Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:41 no-preload-257359 containerd[759]: time="2025-12-06T09:52:41.425219514Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.916947622Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.919648694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.927607211Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:42 no-preload-257359 containerd[759]: time="2025-12-06T09:52:42.928479671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.049537462Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.052567103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.061652310Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.067188454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.436051059Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.438287839Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.445166461Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:52:44 no-preload-257359 containerd[759]: time="2025-12-06T09:52:44.446411675Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:02:48.534173    6872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:02:48.537888    6872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:02:48.539610    6872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:02:48.540185    6872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:02:48.541839    6872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:02:48 up  1:45,  0 user,  load average: 0.10, 1.05, 1.91
	Linux no-preload-257359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:02:45 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:02:46 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 469.
	Dec 06 10:02:46 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:02:46 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:02:46 no-preload-257359 kubelet[6756]: E1206 10:02:46.289358    6756 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:02:46 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:02:46 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:02:46 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 470.
	Dec 06 10:02:46 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:02:46 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:02:47 no-preload-257359 kubelet[6762]: E1206 10:02:47.032920    6762 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:02:47 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:02:47 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:02:47 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 471.
	Dec 06 10:02:47 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:02:47 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:02:47 no-preload-257359 kubelet[6775]: E1206 10:02:47.791276    6775 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:02:47 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:02:47 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:02:48 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 472.
	Dec 06 10:02:48 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:02:48 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:02:48 no-preload-257359 kubelet[6876]: E1206 10:02:48.577561    6876 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:02:48 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:02:48 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
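
Every kubelet restart in the journal above fails the same cgroup validation, so the loop cannot converge on its own. A quick, generic check of which cgroup hierarchy the host actually mounts (standard GNU stat usage, not taken from this report):

	stat -fc %T /sys/fs/cgroup   # prints cgroup2fs under cgroups v2, tmpfs under cgroups v1
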
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359: exit status 6 (326.227004ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:02:48.999312  287656 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-257359" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (109.80s)
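
The stale-kubeconfig warning in the status output above points at minikube update-context; against this profile that would look like the following sketch (profile name from the logs, not part of the captured run):

	out/minikube-linux-arm64 update-context -p no-preload-257359
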

TestStartStop/group/no-preload/serial/SecondStart (370.66s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 10:02:57.332033    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.226667441s)

-- stdout --
	* [no-preload-257359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-257359" primary control-plane node in "no-preload-257359" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1206 10:02:50.560309  287962 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:02:50.560438  287962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:02:50.560447  287962 out.go:374] Setting ErrFile to fd 2...
	I1206 10:02:50.560453  287962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:02:50.560700  287962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:02:50.561041  287962 out.go:368] Setting JSON to false
	I1206 10:02:50.561931  287962 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6322,"bootTime":1765009049,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:02:50.561998  287962 start.go:143] virtualization:  
	I1206 10:02:50.565075  287962 out.go:179] * [no-preload-257359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:02:50.569157  287962 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:02:50.569230  287962 notify.go:221] Checking for updates...
	I1206 10:02:50.575040  287962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:02:50.578100  287962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:02:50.581099  287962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:02:50.584049  287962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:02:50.587045  287962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:02:50.590515  287962 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:02:50.591076  287962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:02:50.613858  287962 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:02:50.613996  287962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:02:50.681770  287962 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:02:50.672313547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:02:50.681877  287962 docker.go:319] overlay module found
	I1206 10:02:50.685299  287962 out.go:179] * Using the docker driver based on existing profile
	I1206 10:02:50.688097  287962 start.go:309] selected driver: docker
	I1206 10:02:50.688133  287962 start.go:927] validating driver "docker" against &{Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:02:50.688234  287962 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:02:50.688955  287962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:02:50.763306  287962 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:02:50.754198972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:02:50.763670  287962 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:02:50.763694  287962 cni.go:84] Creating CNI manager for ""
	I1206 10:02:50.763755  287962 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:02:50.763787  287962 start.go:353] cluster config:
	{Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:02:50.767045  287962 out.go:179] * Starting "no-preload-257359" primary control-plane node in "no-preload-257359" cluster
	I1206 10:02:50.769839  287962 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:02:50.772658  287962 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:02:50.775524  287962 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:02:50.775664  287962 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/config.json ...
	I1206 10:02:50.776024  287962 cache.go:107] acquiring lock: {Name:mkad35cce177b57f018574c39ee8c3c239eb9b07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776116  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 10:02:50.776125  287962 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.204µs
	I1206 10:02:50.776138  287962 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 10:02:50.776152  287962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:02:50.776297  287962 cache.go:107] acquiring lock: {Name:mk5bfca67d26458a19d81fb604def77746df1eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776349  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 10:02:50.776357  287962 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 64.616µs
	I1206 10:02:50.776363  287962 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776373  287962 cache.go:107] acquiring lock: {Name:mk51ddffc8cf367c8f9ab9dab46cca9425ce4f0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776404  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 10:02:50.776409  287962 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.794µs
	I1206 10:02:50.776415  287962 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776424  287962 cache.go:107] acquiring lock: {Name:mkdb80297b5c34ff2c59c7d0547bc50e4c902573 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776457  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 10:02:50.776467  287962 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.57µs
	I1206 10:02:50.776475  287962 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776497  287962 cache.go:107] acquiring lock: {Name:mk507200c1f46ea68c0c2896fa231924d660663f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776525  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 10:02:50.776530  287962 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.002µs
	I1206 10:02:50.776536  287962 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776545  287962 cache.go:107] acquiring lock: {Name:mkf308199b47415a211213857d6d1bca152d3eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776571  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 10:02:50.776576  287962 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 31.213µs
	I1206 10:02:50.776581  287962 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 10:02:50.776589  287962 cache.go:107] acquiring lock: {Name:mk5d1295ea377d97f7962ba416aea9d5b2908db5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776615  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1206 10:02:50.776620  287962 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.77µs
	I1206 10:02:50.776625  287962 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 10:02:50.776635  287962 cache.go:107] acquiring lock: {Name:mk2939303cfab712d7c12da37ef89ab2271b37f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776664  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 10:02:50.776668  287962 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 34.815µs
	I1206 10:02:50.776674  287962 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 10:02:50.776680  287962 cache.go:87] Successfully saved all images to host disk.
	I1206 10:02:50.798946  287962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:02:50.798971  287962 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:02:50.798991  287962 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:02:50.799021  287962 start.go:360] acquireMachinesLock for no-preload-257359: {Name:mk6d92dd7ed626ac67dff0eb9c6415617a7c299c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.799098  287962 start.go:364] duration metric: took 57.026µs to acquireMachinesLock for "no-preload-257359"
	I1206 10:02:50.799124  287962 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:02:50.799130  287962 fix.go:54] fixHost starting: 
	I1206 10:02:50.799434  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:50.817117  287962 fix.go:112] recreateIfNeeded on no-preload-257359: state=Stopped err=<nil>
	W1206 10:02:50.817159  287962 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:02:50.820604  287962 out.go:252] * Restarting existing docker container for "no-preload-257359" ...
	I1206 10:02:50.820691  287962 cli_runner.go:164] Run: docker start no-preload-257359
	I1206 10:02:51.082081  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:51.109151  287962 kic.go:430] container "no-preload-257359" state is running.
	I1206 10:02:51.111028  287962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 10:02:51.134579  287962 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/config.json ...
	I1206 10:02:51.135073  287962 machine.go:94] provisionDockerMachine start ...
	I1206 10:02:51.135154  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:51.160524  287962 main.go:143] libmachine: Using SSH client type: native
	I1206 10:02:51.161106  287962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 10:02:51.161128  287962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:02:51.161871  287962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:02:54.315394  287962 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-257359
	
	I1206 10:02:54.315419  287962 ubuntu.go:182] provisioning hostname "no-preload-257359"
	I1206 10:02:54.315482  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:54.335607  287962 main.go:143] libmachine: Using SSH client type: native
	I1206 10:02:54.335937  287962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 10:02:54.335955  287962 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-257359 && echo "no-preload-257359" | sudo tee /etc/hostname
	I1206 10:02:54.504049  287962 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-257359
	
	I1206 10:02:54.504125  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:54.526012  287962 main.go:143] libmachine: Using SSH client type: native
	I1206 10:02:54.526337  287962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 10:02:54.526359  287962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-257359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-257359/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-257359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:02:54.679778  287962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:02:54.679871  287962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:02:54.679899  287962 ubuntu.go:190] setting up certificates
	I1206 10:02:54.679930  287962 provision.go:84] configureAuth start
	I1206 10:02:54.680010  287962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 10:02:54.697376  287962 provision.go:143] copyHostCerts
	I1206 10:02:54.697458  287962 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:02:54.697469  287962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:02:54.697553  287962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:02:54.697662  287962 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:02:54.697668  287962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:02:54.697694  287962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:02:54.697758  287962 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:02:54.697763  287962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:02:54.697787  287962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:02:54.697840  287962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.no-preload-257359 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-257359]
	I1206 10:02:54.977047  287962 provision.go:177] copyRemoteCerts
	I1206 10:02:54.977148  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:02:54.977221  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:54.995583  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.103869  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:02:55.123476  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:02:55.143183  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:02:55.162544  287962 provision.go:87] duration metric: took 482.585221ms to configureAuth
	I1206 10:02:55.162615  287962 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:02:55.162829  287962 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:02:55.162844  287962 machine.go:97] duration metric: took 4.027747325s to provisionDockerMachine
	I1206 10:02:55.162853  287962 start.go:293] postStartSetup for "no-preload-257359" (driver="docker")
	I1206 10:02:55.162865  287962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:02:55.162921  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:02:55.162965  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.180527  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.287583  287962 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:02:55.291124  287962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:02:55.291151  287962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:02:55.291168  287962 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:02:55.291224  287962 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:02:55.291309  287962 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:02:55.291497  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:02:55.299238  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:02:55.317641  287962 start.go:296] duration metric: took 154.772967ms for postStartSetup
	I1206 10:02:55.317745  287962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:02:55.317837  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.335751  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.440465  287962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:02:55.445127  287962 fix.go:56] duration metric: took 4.645989389s for fixHost
	I1206 10:02:55.445154  287962 start.go:83] releasing machines lock for "no-preload-257359", held for 4.646041311s
	I1206 10:02:55.445251  287962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 10:02:55.462635  287962 ssh_runner.go:195] Run: cat /version.json
	I1206 10:02:55.462693  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.462962  287962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:02:55.463017  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.487975  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.493550  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.591110  287962 ssh_runner.go:195] Run: systemctl --version
	I1206 10:02:55.687501  287962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:02:55.693096  287962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:02:55.693233  287962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:02:55.701547  287962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:02:55.701573  287962 start.go:496] detecting cgroup driver to use...
	I1206 10:02:55.701604  287962 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:02:55.701653  287962 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:02:55.719594  287962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:02:55.734226  287962 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:02:55.734290  287962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:02:55.750404  287962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:02:55.764033  287962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:02:55.874437  287962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:02:56.003896  287962 docker.go:234] disabling docker service ...
	I1206 10:02:56.004020  287962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:02:56.022407  287962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:02:56.039132  287962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:02:56.150673  287962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:02:56.279968  287962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:02:56.293559  287962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:02:56.309015  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:02:56.320264  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:02:56.329394  287962 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:02:56.329501  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:02:56.338337  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:02:56.348542  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:02:56.357278  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:02:56.366102  287962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:02:56.374530  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:02:56.383495  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:02:56.392560  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:02:56.401292  287962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:02:56.408750  287962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:02:56.416046  287962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:02:56.521476  287962 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:02:56.624710  287962 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:02:56.624790  287962 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:02:56.628711  287962 start.go:564] Will wait 60s for crictl version
	I1206 10:02:56.628775  287962 ssh_runner.go:195] Run: which crictl
	I1206 10:02:56.632374  287962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:02:56.660663  287962 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:02:56.660734  287962 ssh_runner.go:195] Run: containerd --version
	I1206 10:02:56.680803  287962 ssh_runner.go:195] Run: containerd --version
	I1206 10:02:56.706136  287962 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:02:56.708890  287962 cli_runner.go:164] Run: docker network inspect no-preload-257359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:02:56.729633  287962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 10:02:56.733998  287962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:02:56.743903  287962 kubeadm.go:884] updating cluster {Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:02:56.744025  287962 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:02:56.744079  287962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:02:56.773425  287962 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:02:56.773444  287962 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:02:56.773451  287962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 10:02:56.773547  287962 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-257359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
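
The kubelet unit text above is rendered from a template with the runtime, version, node name, and IP substituted in. An illustrative stand-in using text/template (not minikube's actual template; flags trimmed to the ones visible in the log):

package main

import (
    "os"
    "text/template"
)

const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
    t := template.Must(template.New("kubelet").Parse(dropIn))
    _ = t.Execute(os.Stdout, map[string]string{
        "Runtime": "containerd",
        "Version": "v1.35.0-beta.0",
        "Node":    "no-preload-257359",
        "IP":      "192.168.76.2",
    })
}
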
	I1206 10:02:56.773604  287962 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:02:56.801911  287962 cni.go:84] Creating CNI manager for ""
	I1206 10:02:56.801937  287962 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:02:56.801959  287962 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:02:56.801983  287962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-257359 NodeName:no-preload-257359 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:02:56.802107  287962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-257359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
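
The kubeadm config above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Assuming gopkg.in/yaml.v3 is available, the individual documents can be walked back out of the written file like this:

package main

import (
    "errors"
    "fmt"
    "io"
    "os"

    "gopkg.in/yaml.v3"
)

func main() {
    f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    dec := yaml.NewDecoder(f)
    for {
        var doc struct {
            APIVersion string `yaml:"apiVersion"`
            Kind       string `yaml:"kind"`
        }
        err := dec.Decode(&doc)
        if errors.Is(err, io.EOF) {
            break
        }
        if err != nil {
            panic(err)
        }
        fmt.Println(doc.APIVersion, doc.Kind) // one line per YAML document
    }
}
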
	
	I1206 10:02:56.802181  287962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:02:56.810040  287962 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:02:56.810160  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:02:56.817847  287962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:02:56.834027  287962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:02:56.847083  287962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 10:02:56.859664  287962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:02:56.863520  287962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
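
The bash one-liner above pins control-plane.minikube.internal in /etc/hosts: filter out any stale entry, append a fresh mapping, and swap the file in via a temp copy (/tmp/h.$$). The same operation sketched natively in Go, with an illustrative temp-file name:

package main

import (
    "os"
    "strings"
)

func pinHost(path, ip, name string) error {
    raw, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t<name>$'
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, ip+"\t"+name)
    tmp := path + ".minikube-tmp"
    if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
        return err
    }
    return os.Rename(tmp, path) // the log uses `sudo cp`; rename is the atomic variant
}

func main() {
    if err := pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
        panic(err)
    }
}
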
	I1206 10:02:56.873266  287962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:02:56.982686  287962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:02:57.002169  287962 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359 for IP: 192.168.76.2
	I1206 10:02:57.002242  287962 certs.go:195] generating shared ca certs ...
	I1206 10:02:57.002272  287962 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.002542  287962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:02:57.002639  287962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:02:57.002674  287962 certs.go:257] generating profile certs ...
	I1206 10:02:57.002879  287962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/client.key
	I1206 10:02:57.003008  287962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key.673fc286
	I1206 10:02:57.003090  287962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key
	I1206 10:02:57.003263  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:02:57.003330  287962 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:02:57.003355  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:02:57.003487  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:02:57.003549  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:02:57.003611  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:02:57.003709  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:02:57.004746  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:02:57.030862  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:02:57.051127  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:02:57.070625  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:02:57.091646  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:02:57.109996  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:02:57.128427  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:02:57.146680  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:02:57.165617  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:02:57.183550  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:02:57.201664  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:02:57.220303  287962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
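
The repeated "scp memory --> <path>" steps stream in-memory payloads (rendered configs, certs already loaded into the process) to the node instead of copying local files. Assuming a connected *ssh.Client from golang.org/x/crypto/ssh (see the dial sketch after the sshutil lines below), piping the bytes into sudo tee gives the same effect; this helper is a sketch, not minikube's ssh_runner:

package sshcopy

import (
    "bytes"

    "golang.org/x/crypto/ssh"
)

// writeRemote copies an in-memory payload to path on the remote host.
// Error handling is minimal on purpose.
func writeRemote(client *ssh.Client, path string, payload []byte) error {
    sess, err := client.NewSession()
    if err != nil {
        return err
    }
    defer sess.Close()
    sess.Stdin = bytes.NewReader(payload)
    return sess.Run("sudo tee " + path + " >/dev/null")
}
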
	I1206 10:02:57.233337  287962 ssh_runner.go:195] Run: openssl version
	I1206 10:02:57.240029  287962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.247873  287962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:02:57.255843  287962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.259576  287962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.259660  287962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.301069  287962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:02:57.308859  287962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.316603  287962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:02:57.324324  287962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.328364  287962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.328429  287962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.371448  287962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:02:57.379279  287962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.386821  287962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:02:57.394739  287962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.398636  287962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.398746  287962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.439669  287962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
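
Each of the three passes above computes the OpenSSL subject hash for a CA file and force-links it as /etc/ssl/certs/<hash>.0, the name OpenSSL uses to look up trusted CAs. A sketch that shells out for the hash exactly as the log does, then recreates the symlink:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func linkCert(pem string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    if err != nil {
        return err
    }
    link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    _ = os.Remove(link) // emulate ln -fs: replace any existing link
    return os.Symlink(pem, link)
}

func main() {
    if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}
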
	I1206 10:02:57.447527  287962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:02:57.451414  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:02:57.495635  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:02:57.538757  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:02:57.580199  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:02:57.621554  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:02:57.663093  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
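
The openssl -checkend 86400 probes above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). The same check in pure Go with crypto/x509:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func expiresSoon(path string, within time.Duration) (bool, error) {
    raw, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(raw)
    if block == nil {
        return false, fmt.Errorf("%s: no PEM block found", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
    soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        panic(err)
    }
    fmt.Println("expires within 24h:", soon)
}
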
	I1206 10:02:57.704506  287962 kubeadm.go:401] StartCluster: {Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:02:57.704612  287962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:02:57.704683  287962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:02:57.737766  287962 cri.go:89] found id: ""
	I1206 10:02:57.737905  287962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:02:57.747113  287962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:02:57.747187  287962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:02:57.747271  287962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:02:57.755581  287962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:02:57.756044  287962 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:02:57.756207  287962 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-257359" cluster setting kubeconfig missing "no-preload-257359" context setting]
	I1206 10:02:57.756524  287962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.758045  287962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:02:57.767197  287962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1206 10:02:57.767270  287962 kubeadm.go:602] duration metric: took 20.064098ms to restartPrimaryControlPlane
	I1206 10:02:57.767298  287962 kubeadm.go:403] duration metric: took 62.801543ms to StartCluster
	I1206 10:02:57.767343  287962 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.767500  287962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:02:57.768125  287962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.768380  287962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:02:57.768778  287962 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:02:57.768818  287962 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:02:57.768907  287962 addons.go:70] Setting storage-provisioner=true in profile "no-preload-257359"
	I1206 10:02:57.768922  287962 addons.go:239] Setting addon storage-provisioner=true in "no-preload-257359"
	I1206 10:02:57.768948  287962 host.go:66] Checking if "no-preload-257359" exists ...
	I1206 10:02:57.769092  287962 addons.go:70] Setting dashboard=true in profile "no-preload-257359"
	I1206 10:02:57.769107  287962 addons.go:239] Setting addon dashboard=true in "no-preload-257359"
	W1206 10:02:57.769113  287962 addons.go:248] addon dashboard should already be in state true
	I1206 10:02:57.769132  287962 host.go:66] Checking if "no-preload-257359" exists ...
	I1206 10:02:57.769421  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.769598  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.771431  287962 addons.go:70] Setting default-storageclass=true in profile "no-preload-257359"
	I1206 10:02:57.771472  287962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-257359"
	I1206 10:02:57.771804  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.774271  287962 out.go:179] * Verifying Kubernetes components...
	I1206 10:02:57.777342  287962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:02:57.814184  287962 addons.go:239] Setting addon default-storageclass=true in "no-preload-257359"
	I1206 10:02:57.814227  287962 host.go:66] Checking if "no-preload-257359" exists ...
	I1206 10:02:57.814645  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.822142  287962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 10:02:57.822210  287962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:02:57.824805  287962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:57.824833  287962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:02:57.824900  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:57.829489  287962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 10:02:57.833709  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 10:02:57.833737  287962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 10:02:57.833810  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:57.854012  287962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:02:57.854037  287962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:02:57.854112  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:57.856277  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:57.890754  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:57.895620  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
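
The sshutil.go:53 lines open SSH clients against the docker-forwarded port (127.0.0.1:33098) using the machine's id_rsa. A minimal equivalent with golang.org/x/crypto/ssh; InsecureIgnoreHostKey matches the trust model for a local container and should not be reused for anything remote:

package main

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    key, err := os.ReadFile("/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa")
    if err != nil {
        panic(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        panic(err)
    }
    client, err := ssh.Dial("tcp", "127.0.0.1:33098", &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    })
    if err != nil {
        panic(err)
    }
    defer client.Close()
    fmt.Println("connected:", string(client.ServerVersion()))
}
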
	I1206 10:02:58.001418  287962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:02:58.013906  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:58.039554  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:02:58.055658  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 10:02:58.055695  287962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 10:02:58.101580  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 10:02:58.101618  287962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 10:02:58.128504  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 10:02:58.128540  287962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 10:02:58.143820  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 10:02:58.143842  287962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 10:02:58.157352  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 10:02:58.157374  287962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 10:02:58.170340  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 10:02:58.170363  287962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 10:02:58.183841  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 10:02:58.183863  287962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 10:02:58.196825  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 10:02:58.196897  287962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 10:02:58.210321  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:02:58.210397  287962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 10:02:58.225210  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:02:58.721996  287962 node_ready.go:35] waiting up to 6m0s for node "no-preload-257359" to be "Ready" ...
	W1206 10:02:58.722385  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.722423  287962 retry.go:31] will retry after 208.185624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:58.722498  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.722524  287962 retry.go:31] will retry after 257.532203ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:58.722744  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.722763  287962 retry.go:31] will retry after 233.335704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.931351  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:58.956947  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:02:58.980534  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:02:59.025353  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.025390  287962 retry.go:31] will retry after 353.673401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:59.100456  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.100492  287962 retry.go:31] will retry after 331.036919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:59.107099  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.107140  287962 retry.go:31] will retry after 441.449257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.379273  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:59.432019  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:02:59.442471  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.442555  287962 retry.go:31] will retry after 796.609581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:59.506117  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.506155  287962 retry.go:31] will retry after 415.679971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.549272  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:02:59.613494  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.613567  287962 retry.go:31] will retry after 772.999564ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.922714  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:02:59.987770  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.987802  287962 retry.go:31] will retry after 559.230816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.240691  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:03:00.387605  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:00.455516  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.455602  287962 retry.go:31] will retry after 1.187622029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:00.463633  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.463667  287962 retry.go:31] will retry after 1.200867497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.547852  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:00.612093  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.612139  287962 retry.go:31] will retry after 893.435078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:00.722580  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
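	Note on the repeated stderr blocks above: every manifest fails identically because kubectl performs client-side validation by first downloading the OpenAPI schema from the apiserver; with nothing listening on localhost:8443 yet, that download itself fails before any YAML is evaluated, which is why the message suggests --validate=false. A minimal, self-contained Go probe (illustrative only, not part of minikube's code) reproduces the exact dial error seen in each block:

	// probe.go: dial the port the kubectl validation step is trying to reach.
	// With no apiserver listening on 8443 this prints the same
	// "connect: connection refused" seen in every stderr block above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// e.g. dial tcp [::1]:8443: connect: connection refused
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is reachable")
	}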
	I1206 10:03:01.505896  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:01.574850  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.574887  287962 retry.go:31] will retry after 1.48070732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.644272  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:03:01.664837  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:01.713457  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.713495  287962 retry.go:31] will retry after 1.793608766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:01.741247  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.741282  287962 retry.go:31] will retry after 1.808351217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:02.723834  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:03.056692  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:03.120499  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.120617  287962 retry.go:31] will retry after 3.123226715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.507497  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:03:03.550077  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:03.603673  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.603716  287962 retry.go:31] will retry after 1.607269464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:03.627477  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.627509  287962 retry.go:31] will retry after 1.427548448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.055613  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:05.122568  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.122601  287962 retry.go:31] will retry after 4.264191427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.212016  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:05.222808  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:05.272035  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.272069  287962 retry.go:31] will retry after 4.227301864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:06.244562  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:06.309955  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:06.310033  287962 retry.go:31] will retry after 4.216626241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:07.223150  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:09.387517  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:09.457868  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:09.457900  287962 retry.go:31] will retry after 2.71431214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:09.499976  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:09.592059  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:09.592097  287962 retry.go:31] will retry after 2.312821913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:09.722871  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
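	The retry.go:31 lines above show minikube's addon applier backing off between attempts, with the "will retry after" delay growing (and jittering) from ~400ms toward several seconds. A minimal sketch of that retry-with-backoff pattern follows; the helper name retryWithBackoff, the attempt cap, and the doubling-plus-jitter schedule are assumptions for illustration, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or maxAttempts is
	// exhausted, roughly doubling the wait and adding jitter each time,
	// which matches the growing "will retry after ..." delays in the log.
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		wait := base
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(wait) / 2))
			fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
			time.Sleep(wait + jitter)
			wait *= 2
		}
		return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		// Simulate the failing apply: every attempt returns the same
		// connection-refused error until the apiserver comes back.
		_ = retryWithBackoff(5, 400*time.Millisecond, func() error {
			return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
		})
	}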
	I1206 10:03:10.527449  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:10.596453  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:10.596493  287962 retry.go:31] will retry after 5.508635395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:11.905982  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:11.973035  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:11.973068  287962 retry.go:31] will retry after 5.314130156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:12.173390  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:12.223182  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:12.232700  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:12.232730  287962 retry.go:31] will retry after 4.087053557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:14.722724  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:16.105932  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:16.170813  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:16.170847  287962 retry.go:31] will retry after 7.046098386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:16.320412  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:16.383512  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:16.383547  287962 retry.go:31] will retry after 7.362220175s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:16.723439  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:17.287932  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:17.349195  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:17.349229  287962 retry.go:31] will retry after 7.285529113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
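
The retry.go:31 lines show each failed apply being rescheduled with a growing, randomized interval (roughly 2-3s at first, climbing past 30s later in this log) rather than a fixed delay. A sketch of that retry-with-jittered-backoff pattern, with the growth factor and jitter as assumptions rather than minikube's actual parameters:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries fn, sleeping a jittered, doubling interval
    // between attempts so concurrent retries don't fire in lockstep.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        wait := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(wait + time.Duration(rand.Int63n(int64(wait))))
            wait *= 2
        }
        return err
    }

    func main() {
        err := retryWithBackoff(4, 2*time.Second, func() error {
            return errors.New("connection refused") // stand-in for the failing apply
        })
        fmt.Println(err)
    }
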
	W1206 10:03:19.223445  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:21.722607  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:23.217212  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:23.292880  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:23.292916  287962 retry.go:31] will retry after 20.839138696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:23.746772  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:23.837743  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:23.837784  287962 retry.go:31] will retry after 13.347463373s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:24.222666  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:24.635188  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:24.696400  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:24.696432  287962 retry.go:31] will retry after 15.254736641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:26.722523  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:28.722631  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:30.722708  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:33.222657  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:35.222704  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:37.186329  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:37.223320  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:37.292820  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:37.292848  287962 retry.go:31] will retry after 20.057827776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:39.722636  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:39.952067  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:40.017939  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:40.018752  287962 retry.go:31] will retry after 24.548199368s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:42.222608  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:44.132237  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:44.192642  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:44.192676  287962 retry.go:31] will retry after 19.029425314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:44.223357  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:46.722764  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:49.222520  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:51.222722  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:53.722670  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:56.222685  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
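
Interleaved with the apply retries, the node_ready.go:55 warnings come from polling the node object over the same dead endpoint (GET /api/v1/nodes/no-preload-257359) and inspecting its Ready condition; until the apiserver answers, every poll returns connection refused. A client-go sketch of that check, with the function shape an illustrative assumption, not minikube's code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and reports whether its Ready condition is
    // True. With the apiserver refusing connections, the Get itself returns
    // the "dial tcp ... connection refused" errors logged above.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(context.Background(), cs, "no-preload-257359")
        fmt.Println(ready, err)
    }
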
	I1206 10:03:57.351089  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:57.410313  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:57.410344  287962 retry.go:31] will retry after 37.517817356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:58.223443  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:00.723057  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:03.222473  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:04:03.222747  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:03.285193  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:04:03.285231  287962 retry.go:31] will retry after 27.356198279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:04:04.567241  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:04:04.627406  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:04:04.627436  287962 retry.go:31] will retry after 26.195836442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
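Every "error validating" failure above shares one root cause: kubectl cannot reach the apiserver on localhost:8443, so the --validate=false workaround suggested in the message would only hide the real problem. A minimal diagnostic sketch, assuming the profile name shown in this log and standard tooling inside the node image:

	# is anything listening on the apiserver port inside the node?
	minikube -p no-preload-257359 ssh -- sudo ss -tlnp | grep 8443
	# probe the apiserver health endpoint directly (certificate checks skipped)
	minikube -p no-preload-257359 ssh -- curl -sk https://localhost:8443/healthz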
	W1206 10:04:05.722912  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	[... the same node_ready.go:55 connection-refused warning repeats at roughly 2.5-second intervals from 10:04:08 through 10:04:26 ...]
	W1206 10:04:28.722606  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:30.642530  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:04:30.708862  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:04:30.708972  287962 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 10:04:30.722850  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:30.824129  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:04:30.888593  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:04:30.888692  287962 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 10:04:32.735617  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:34.928756  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:04:35.053445  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:04:35.053544  287962 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:04:35.056779  287962 out.go:179] * Enabled addons: 
	I1206 10:04:35.059761  287962 addons.go:530] duration metric: took 1m37.290947826s for enable addons: enabled=[]
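The empty enabled=[] list above confirms that none of the addon manifests were applied. Once the apiserver is reachable again, the same addons can be retried by hand; a sketch, assuming the profile name from this log:

	minikube -p no-preload-257359 addons enable storage-provisioner
	minikube -p no-preload-257359 addons enable default-storageclass
	minikube -p no-preload-257359 addons enable dashboard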
	W1206 10:04:35.222513  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	[... the same node_ready.go:55 connection-refused warning repeats at roughly 2.5-second intervals from 10:04:37 through 10:08:54 ...]
	W1206 10:08:57.222560  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:58.722277  287962 node_ready.go:38] duration metric: took 6m0.000230261s for node "no-preload-257359" to be "Ready" ...
	I1206 10:08:58.725649  287962 out.go:203] 
	W1206 10:08:58.728547  287962 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:08:58.728572  287962 out.go:285] * 
	W1206 10:08:58.730704  287962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:08:58.733695  287962 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
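The GUEST_START failure above is the 6-minute WaitNodeCondition timeout: the node never reported a "Ready" condition because the apiserver stayed unreachable. The condition the test polls can be read directly, assuming the kubeconfig context minikube creates for this profile:

	kubectl --context no-preload-257359 get node no-preload-257359 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'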
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-257359
helpers_test.go:243: (dbg) docker inspect no-preload-257359:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	        "Created": "2025-12-06T09:52:27.333376101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:02:50.853067046Z",
	            "FinishedAt": "2025-12-06T10:02:49.497503356Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hostname",
	        "HostsPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hosts",
	        "LogPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26-json.log",
	        "Name": "/no-preload-257359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-257359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-257359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	                "LowerDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-257359",
	                "Source": "/var/lib/docker/volumes/no-preload-257359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-257359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-257359",
	                "name.minikube.sigs.k8s.io": "no-preload-257359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "263a8cb62ad65d73ef315ff544437f3a15543e9da8e511558b3504b20118eae7",
	            "SandboxKey": "/var/run/docker/netns/263a8cb62ad6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-257359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:cd:c5:1d:17:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b05bfbfa55363c82b2c20e75689dc6d905b9177d9ed6efb1bc4c663e65903cf4",
	                    "EndpointID": "fe68f03ea36cc45569898aaadfae8dde5a2342dd57895d5970718f4ce7302e58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-257359",
	                        "76494ba86a40"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
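The inspect output above shows the container running, with every exposed port published on a loopback ephemeral port. A single mapping can be pulled out of that JSON with the same Go-template pattern the harness itself uses later in this log; for example, for the apiserver port (the expected value, 33101, is read off the NetworkSettings block above):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-257359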
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359: exit status 2 (403.916631ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
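minikube status reports component health through its exit code as well as stdout (bit flags covering host, kubelet and apiserver), so a non-zero exit with Host reported as Running is consistent with the cluster layer still being down rather than the container itself, which is what the "(may be ok)" note acknowledges. A wider format template makes the per-component breakdown explicit (field names from minikube's status output):

    out/minikube-linux-arm64 status -p no-preload-257359 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'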
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-257359 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-257359 logs -n 25: (1.168886785s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ embed-certs-100767 image list --format=json                                                                                                                                                                                                                │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-257359 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-257359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-387337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:04 UTC │                     │
	│ stop    │ -p newest-cni-387337 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │ 06 Dec 25 10:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-387337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │ 06 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
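The audit trail above pins down the command under test: the last no-preload-257359 start (06 Dec 25 10:02 UTC) never recorded an end time. To reproduce that second start outside the harness, the same invocation can be run directly from the test workspace:

    out/minikube-linux-arm64 start -p no-preload-257359 --memory=3072 --alsologtostderr \
      --wait=true --preload=false --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0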
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:06:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:06:25.195145  293728 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:06:25.195325  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195335  293728 out.go:374] Setting ErrFile to fd 2...
	I1206 10:06:25.195341  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195634  293728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:06:25.196028  293728 out.go:368] Setting JSON to false
	I1206 10:06:25.196926  293728 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6537,"bootTime":1765009049,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:06:25.196997  293728 start.go:143] virtualization:  
	I1206 10:06:25.199959  293728 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:06:25.203880  293728 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:06:25.204017  293728 notify.go:221] Checking for updates...
	I1206 10:06:25.210368  293728 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:06:25.213374  293728 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:25.216371  293728 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:06:25.221036  293728 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:06:25.223973  293728 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:06:25.227572  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:25.228243  293728 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:06:25.261513  293728 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:06:25.261626  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.340601  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.331029372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.340708  293728 docker.go:319] overlay module found
	I1206 10:06:25.343872  293728 out.go:179] * Using the docker driver based on existing profile
	I1206 10:06:25.346835  293728 start.go:309] selected driver: docker
	I1206 10:06:25.346867  293728 start.go:927] validating driver "docker" against &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.346969  293728 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:06:25.347911  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.407260  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.398348793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.407652  293728 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 10:06:25.407684  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:25.407750  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:25.407788  293728 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.410983  293728 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 10:06:25.413800  293728 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:06:25.416704  293728 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:06:25.419472  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:25.419517  293728 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:06:25.419530  293728 cache.go:65] Caching tarball of preloaded images
	I1206 10:06:25.419542  293728 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:06:25.419614  293728 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:06:25.419624  293728 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:06:25.419745  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.439065  293728 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:06:25.439097  293728 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:06:25.439117  293728 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:06:25.439151  293728 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:06:25.439218  293728 start.go:364] duration metric: took 44.948µs to acquireMachinesLock for "newest-cni-387337"
	I1206 10:06:25.439242  293728 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:06:25.439250  293728 fix.go:54] fixHost starting: 
	I1206 10:06:25.439553  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.455936  293728 fix.go:112] recreateIfNeeded on newest-cni-387337: state=Stopped err=<nil>
	W1206 10:06:25.455970  293728 fix.go:138] unexpected machine state, will restart: <nil>
	W1206 10:06:22.222571  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:24.223444  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
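The interleaved 287962 lines are the no-preload-257359 run still polling its node: "connection refused" on 192.168.76.2:8443 means nothing is listening on the apiserver port inside that container yet. Because the earlier inspect output maps 8443/tcp to 127.0.0.1:33101, the same endpoint can be probed from the host while waiting; /healthz only answers once the apiserver is actually serving (a quick manual check, not part of the harness):

    curl -sk https://127.0.0.1:33101/healthz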
	I1206 10:06:25.459174  293728 out.go:252] * Restarting existing docker container for "newest-cni-387337" ...
	I1206 10:06:25.459260  293728 cli_runner.go:164] Run: docker start newest-cni-387337
	I1206 10:06:25.713574  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.738668  293728 kic.go:430] container "newest-cni-387337" state is running.
	I1206 10:06:25.739140  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:25.765706  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.766035  293728 machine.go:94] provisionDockerMachine start ...
	I1206 10:06:25.766147  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:25.787280  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:25.787973  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:25.787996  293728 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:06:25.789031  293728 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:06:28.943483  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:28.943510  293728 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 10:06:28.943583  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:28.962379  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:28.962708  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:28.962726  293728 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 10:06:29.136463  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:29.136552  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.155008  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:29.155343  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:29.155363  293728 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:06:29.311555  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:06:29.311646  293728 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:06:29.311703  293728 ubuntu.go:190] setting up certificates
	I1206 10:06:29.311733  293728 provision.go:84] configureAuth start
	I1206 10:06:29.311826  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.328361  293728 provision.go:143] copyHostCerts
	I1206 10:06:29.328435  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:06:29.328455  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:06:29.328532  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:06:29.328644  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:06:29.328655  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:06:29.328683  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:06:29.328754  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:06:29.328763  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:06:29.328788  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:06:29.328850  293728 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
	I1206 10:06:29.477422  293728 provision.go:177] copyRemoteCerts
	I1206 10:06:29.477497  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:06:29.477551  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.495349  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.603554  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:06:29.622338  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:06:29.641011  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:06:29.660417  293728 provision.go:87] duration metric: took 348.656521ms to configureAuth
	I1206 10:06:29.660488  293728 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:06:29.660700  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:29.660714  293728 machine.go:97] duration metric: took 3.894659315s to provisionDockerMachine
	I1206 10:06:29.660722  293728 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 10:06:29.660734  293728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:06:29.660787  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:06:29.660840  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.679336  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.792654  293728 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:06:29.796414  293728 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:06:29.796451  293728 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:06:29.796481  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:06:29.796555  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:06:29.796637  293728 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:06:29.796752  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:06:29.804466  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:29.822913  293728 start.go:296] duration metric: took 162.176035ms for postStartSetup
	I1206 10:06:29.822993  293728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:06:29.823033  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.841962  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.944706  293728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:06:29.949621  293728 fix.go:56] duration metric: took 4.510364001s for fixHost
	I1206 10:06:29.949690  293728 start.go:83] releasing machines lock for "newest-cni-387337", held for 4.510458303s
	I1206 10:06:29.949801  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.966982  293728 ssh_runner.go:195] Run: cat /version.json
	I1206 10:06:29.967044  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.967315  293728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:06:29.967425  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.989346  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.995399  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:30.108934  293728 ssh_runner.go:195] Run: systemctl --version
	W1206 10:06:26.722852  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:29.222555  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:30.251570  293728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:06:30.256600  293728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:06:30.256686  293728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:06:30.265366  293728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:06:30.265436  293728 start.go:496] detecting cgroup driver to use...
	I1206 10:06:30.265475  293728 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:06:30.265547  293728 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:06:30.285393  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:06:30.300014  293728 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:06:30.300101  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:06:30.316388  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:06:30.330703  293728 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:06:30.447811  293728 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:06:30.578928  293728 docker.go:234] disabling docker service ...
	I1206 10:06:30.579012  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:06:30.595245  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:06:30.608936  293728 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:06:30.732584  293728 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:06:30.854426  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:06:30.867755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:06:30.882294  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:06:30.891997  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:06:30.901695  293728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:06:30.901766  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:06:30.911307  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.920864  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:06:30.930280  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.939955  293728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:06:30.948517  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:06:30.957894  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:06:30.967715  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:06:30.977793  293728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:06:30.985557  293728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:06:30.993239  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.114748  293728 ssh_runner.go:195] Run: sudo systemctl restart containerd
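The sed edits above rewrite /etc/containerd/config.toml in place: SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, the sandbox image is pinned to registry.k8s.io/pause:3.10.1, and unprivileged ports are re-enabled; the daemon-reload plus restart make them take effect. A post-restart sanity check over the same keys (a manual grep, not part of the harness):

    sudo grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml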
	I1206 10:06:31.239476  293728 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:06:31.239597  293728 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:06:31.244664  293728 start.go:564] Will wait 60s for crictl version
	I1206 10:06:31.244770  293728 ssh_runner.go:195] Run: which crictl
	I1206 10:06:31.249231  293728 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:06:31.276528  293728 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:06:31.276637  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.298790  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.323558  293728 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:06:31.326534  293728 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:06:31.343556  293728 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 10:06:31.347752  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:06:31.361512  293728 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 10:06:31.364437  293728 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:06:31.364599  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:31.364692  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.390507  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.390542  293728 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:06:31.390602  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.417903  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.417928  293728 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:06:31.417937  293728 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 10:06:31.418044  293728 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
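The drop-in rendered above relies on systemd's list-reset idiom: a bare `ExecStart=` clears the ExecStart inherited from kubelet.service so the following line fully replaces the command rather than appending a second one. A rough sketch of producing that drop-in (assumed template, not minikube's own; the destination path comes from the scp step a few lines below):

```go
package main

import (
	"fmt"
	"os"
)

// An empty ExecStart= resets the list inherited from kubelet.service;
// the second ExecStart= then installs the fully flagged command.
const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`

func main() {
	unit := fmt.Sprintf(dropIn,
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		"newest-cni-387337",
		"192.168.85.2")
	// Same destination the log scp's to below (writing it needs root).
	err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```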
	I1206 10:06:31.418117  293728 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:06:31.443849  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:31.443876  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:31.443900  293728 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 10:06:31.443924  293728 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:06:31.444044  293728 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
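The generated file above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by `---`, shipped below as /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of iterating such a multi-document file with gopkg.in/yaml.v3, for illustration only:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // one Decode call per "---"-separated document
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
```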
	I1206 10:06:31.444118  293728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:06:31.452187  293728 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:06:31.452301  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:06:31.460150  293728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:06:31.473854  293728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:06:31.487946  293728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 10:06:31.501615  293728 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:06:31.505530  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:06:31.516062  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.633832  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:06:31.655929  293728 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 10:06:31.655955  293728 certs.go:195] generating shared ca certs ...
	I1206 10:06:31.655972  293728 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:31.656127  293728 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:06:31.656182  293728 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:06:31.656198  293728 certs.go:257] generating profile certs ...
	I1206 10:06:31.656306  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 10:06:31.656372  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 10:06:31.656419  293728 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 10:06:31.656536  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:06:31.656576  293728 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:06:31.656590  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:06:31.656620  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:06:31.656647  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:06:31.656675  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:06:31.656737  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:31.657407  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:06:31.678086  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:06:31.699851  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:06:31.722100  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:06:31.743193  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:06:31.762896  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:06:31.781616  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:06:31.801280  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:06:31.819401  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:06:31.838552  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:06:31.856936  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:06:31.875547  293728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:06:31.888930  293728 ssh_runner.go:195] Run: openssl version
	I1206 10:06:31.895342  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.903529  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:06:31.911304  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915287  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915352  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.961696  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:06:31.970315  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.981710  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:06:31.992227  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996668  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996744  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:06:32.043296  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:06:32.051139  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.058979  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:06:32.066993  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071120  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071217  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.113955  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
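The hash/symlink pairs above follow OpenSSL's subject-hash lookup convention: each CA installed under /usr/share/ca-certificates gets a symlink named `<subject-hash>.0` in /etc/ssl/certs (51391683.0, 3ec20f2e.0, and b5213941.0 here), which is what the `sudo test -L` probes verify. A small sketch of creating such a link (assumes `openssl` on PATH; not the project's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the openssl -hash / ln -fs pair in the log:
// compute the CA's subject hash and expose it as /etc/ssl/certs/<hash>.0.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```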
	I1206 10:06:32.121998  293728 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:06:32.126168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:06:32.167933  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:06:32.209594  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:06:32.252826  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:06:32.295168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:06:32.336384  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
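Each `-checkend 86400` invocation above exits successfully only if the certificate will still be valid 24 hours from now; all six control-plane certs appear to pass here, since the flow proceeds straight to StartCluster without regenerating anything. The equivalent check in Go's crypto/x509, as a sketch rather than the project's implementation:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at certPath stops being
// valid within d — the inverse of a passing `openssl x509 -checkend`.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```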
	I1206 10:06:32.377923  293728 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:32.378019  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:06:32.378107  293728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:06:32.406152  293728 cri.go:89] found id: ""
	I1206 10:06:32.406224  293728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:06:32.414373  293728 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:06:32.414394  293728 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:06:32.414444  293728 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:06:32.422214  293728 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:06:32.422855  293728 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-387337" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.423179  293728 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-387337" cluster setting kubeconfig missing "newest-cni-387337" context setting]
	I1206 10:06:32.423737  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.425135  293728 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:06:32.433653  293728 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 10:06:32.433689  293728 kubeadm.go:602] duration metric: took 19.289872ms to restartPrimaryControlPlane
	I1206 10:06:32.433699  293728 kubeadm.go:403] duration metric: took 55.791147ms to StartCluster
	I1206 10:06:32.433714  293728 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.433786  293728 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.434769  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.434995  293728 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:06:32.435318  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:32.435370  293728 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:06:32.435471  293728 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-387337"
	I1206 10:06:32.435485  293728 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-387337"
	I1206 10:06:32.435510  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435575  293728 addons.go:70] Setting dashboard=true in profile "newest-cni-387337"
	I1206 10:06:32.435608  293728 addons.go:239] Setting addon dashboard=true in "newest-cni-387337"
	W1206 10:06:32.435630  293728 addons.go:248] addon dashboard should already be in state true
	I1206 10:06:32.435689  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435986  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436310  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436715  293728 addons.go:70] Setting default-storageclass=true in profile "newest-cni-387337"
	I1206 10:06:32.436742  293728 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-387337"
	I1206 10:06:32.437054  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.440794  293728 out.go:179] * Verifying Kubernetes components...
	I1206 10:06:32.443631  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:32.498221  293728 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 10:06:32.501060  293728 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 10:06:32.503631  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 10:06:32.503654  293728 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 10:06:32.503744  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.508648  293728 addons.go:239] Setting addon default-storageclass=true in "newest-cni-387337"
	I1206 10:06:32.508690  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.509493  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.523049  293728 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:06:32.526921  293728 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.526947  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:06:32.527022  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.570818  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.571691  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.595638  293728 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.595658  293728 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:06:32.595716  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.624247  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
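The `docker container inspect -f` calls above use a Go template to dig the host port bound to the container's 22/tcp out of NetworkSettings.Ports; that port (33103) is what the sshutil clients then connect to. A standalone demonstration of the same template expression against a stand-in structure (illustrative only):

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

type portBinding struct{ HostPort string }

func main() {
	// Stand-in for the relevant slice of `docker container inspect` output.
	data := map[string]interface{}{
		"NetworkSettings": map[string]interface{}{
			"Ports": map[string][]portBinding{
				"22/tcp": {{HostPort: "33103"}},
			},
		},
	}
	// The same expression the cli_runner lines pass to docker's -f flag.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println() // prints: 33103
}
```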
	I1206 10:06:32.694342  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:06:32.746370  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 10:06:32.746390  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 10:06:32.765644  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.786998  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 10:06:32.787020  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 10:06:32.804870  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.820938  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 10:06:32.821012  293728 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 10:06:32.877095  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 10:06:32.877165  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 10:06:32.903565  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 10:06:32.903593  293728 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 10:06:32.916625  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 10:06:32.916699  293728 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 10:06:32.930049  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 10:06:32.930072  293728 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 10:06:32.943222  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 10:06:32.943248  293728 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 10:06:32.958124  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:32.958148  293728 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 10:06:32.971454  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:33.482958  293728 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:06:33.483036  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:33.483155  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483183  293728 retry.go:31] will retry after 318.519734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:33.483231  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483244  293728 retry.go:31] will retry after 239.813026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:33.483501  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483518  293728 retry.go:31] will retry after 128.431008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
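Every failed apply above is followed by a retry.go line ("will retry after 318.519734ms", "239.813026ms", ...): the kubectl applies race the restarting apiserver, which is still refusing connections on localhost:8443, so each addon is retried with short jittered waits. A sketch of that shape (hypothetical; the real backoff policy lives in minikube's retry package):

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// retryApply re-runs apply with jittered, growing waits until it succeeds
// or ctx expires — the behavior the interleaved retry.go lines suggest.
func retryApply(ctx context.Context, apply func() error) error {
	wait := 250 * time.Millisecond
	for {
		err := apply()
		if err == nil {
			return nil
		}
		// Jitter spreads out the parallel appliers (storageclass,
		// storage-provisioner, dashboard) seen retrying above.
		sleep := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		select {
		case <-ctx.Done():
			return fmt.Errorf("%w; last apply error: %v", ctx.Err(), err)
		case <-time.After(sleep):
		}
		wait *= 2
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	attempts := 0
	err := retryApply(ctx, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("connect: connection refused") // apiserver not up yet
		}
		return nil
	})
	fmt.Println(attempts, err) // 3 <nil> once the fake apiserver "comes up"
}
```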
	I1206 10:06:33.612510  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:33.679631  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.679670  293728 retry.go:31] will retry after 494.781452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.723639  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:33.790368  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.790401  293728 retry.go:31] will retry after 373.145908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.802573  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:33.864526  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.864571  293728 retry.go:31] will retry after 555.783365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.983818  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:34.164188  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:34.174768  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:34.315072  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.315120  293728 retry.go:31] will retry after 679.653646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:34.319455  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.319548  293728 retry.go:31] will retry after 695.531102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.421513  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:34.483690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:34.487662  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.487697  293728 retry.go:31] will retry after 692.225187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.983561  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:34.995819  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:35.016010  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:35.122122  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.122225  293728 retry.go:31] will retry after 1.142566381s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:35.138887  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.138925  293728 retry.go:31] will retry after 649.678663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.180839  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:31.222846  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:33.722513  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:35.247363  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.247415  293728 retry.go:31] will retry after 580.881907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.483771  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:35.788736  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:35.829213  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:35.856520  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.856598  293728 retry.go:31] will retry after 1.553154314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:35.896812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.896844  293728 retry.go:31] will retry after 933.683215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.984035  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.265085  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:36.326884  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.326918  293728 retry.go:31] will retry after 708.086155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.484141  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.831542  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:36.897118  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.897156  293728 retry.go:31] will retry after 1.33074055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.983504  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.035538  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:37.096009  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.096042  293728 retry.go:31] will retry after 1.790090237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.410554  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:37.480541  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.480578  293728 retry.go:31] will retry after 966.279559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.483641  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.984118  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:38.228242  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:38.293907  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.293942  293728 retry.go:31] will retry after 2.616205885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.447170  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:38.483864  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:38.514147  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.514181  293728 retry.go:31] will retry after 2.714109668s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.886857  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:38.951997  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.952029  293728 retry.go:31] will retry after 2.462359856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.983614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.483264  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.983242  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:35.723224  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:38.222673  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:40.483248  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:40.910479  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:40.983819  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:40.985785  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:40.985821  293728 retry.go:31] will retry after 2.652074408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.229298  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:41.298980  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.299018  293728 retry.go:31] will retry after 3.795353676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.415143  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:41.478696  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.478758  293728 retry.go:31] will retry after 5.28721939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.483845  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:41.983945  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.483250  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.483241  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.638309  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:43.697835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:43.697874  293728 retry.go:31] will retry after 4.887793633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:43.983195  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.483546  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.983775  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.095370  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:45.192562  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:45.192602  293728 retry.go:31] will retry after 8.015655906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:40.722829  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:42.723326  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:45.223605  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:45.483497  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.984044  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.483220  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.766179  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:46.829923  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:46.829956  293728 retry.go:31] will retry after 4.667102636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:46.984011  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.483312  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.984058  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.586389  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:48.650814  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:48.650848  293728 retry.go:31] will retry after 13.339615646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:48.983299  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.483453  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.983414  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:47.722614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:50.222614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
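
The 287962 lines come from a second, interleaved test cluster (profile no-preload-257359) polling its node's Ready condition against 192.168.76.2:8443 every couple of seconds; its apiserver is refusing connections in the same way. The equivalent manual check, assuming a working kubeconfig for that profile (hypothetical, for illustration):

    kubectl get node no-preload-257359 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
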
	I1206 10:06:50.483943  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:50.983588  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.497329  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:51.584226  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:51.584262  293728 retry.go:31] will retry after 10.765270657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:51.983783  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.484023  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.983169  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.208585  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:53.275063  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:53.275124  293728 retry.go:31] will retry after 12.265040886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:53.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.983886  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.483520  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.983246  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:52.722502  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:54.722548  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:55.484066  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:55.983753  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.483532  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.983522  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.483514  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.983263  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.483994  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.983173  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.483759  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.983187  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
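
The half-second cadence of these ssh_runner lines is minikube waiting for an apiserver process to exist at all: pgrep -x requires the pattern to match exactly, -f matches against the full command line rather than just the process name, and -n keeps only the newest match. Run by hand inside the node it behaves the same way (pattern quoted here for the shell; ssh_runner passes it as a single argument):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    echo $?   # 0 once a matching kube-apiserver process exists, 1 while it is absent
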
	W1206 10:06:56.722592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:58.723298  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:00.483755  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:00.984174  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.983995  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.991432  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:02.091463  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.091500  293728 retry.go:31] will retry after 13.890333948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.349878  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:02.411835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.411870  293728 retry.go:31] will retry after 7.977295138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.483150  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:02.983902  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.483778  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.983278  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.483894  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.983934  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:01.222997  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:03.722642  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:05.483794  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:05.540834  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:05.606800  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:05.606832  293728 retry.go:31] will retry after 11.29369971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:05.983418  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.983887  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.483439  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.984054  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.483236  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.983521  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.483231  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:06.222598  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:08.222649  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:10.390061  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:10.460795  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:10.460828  293728 retry.go:31] will retry after 24.523063216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:10.483989  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:10.983508  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.483968  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.983921  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.983503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.483736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.983533  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.483788  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.983198  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:10.722891  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:13.222531  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:15.223567  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:15.483180  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:15.982114  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:07:15.983591  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:16.054278  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:16.054318  293728 retry.go:31] will retry after 20.338606766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:16.484114  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:16.901533  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:07:16.984157  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:17.001960  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:17.001998  293728 retry.go:31] will retry after 24.827417164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:17.483261  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:17.983420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.983281  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.483741  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.983176  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:17.722636  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:20.222572  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:20.483695  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:20.983984  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.483862  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.483812  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.983632  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.483796  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.984175  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:22.222705  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:24.723752  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:25.483633  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:25.984006  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.483830  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.983203  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.483211  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.983237  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.484156  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.983736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.483880  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.984116  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:27.222614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:29.223485  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:30.483549  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:30.983243  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.483786  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.983608  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
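The half-second cadence of these ssh_runner entries is a wait loop: minikube polls sudo pgrep -xnf kube-apiserver.*minikube.* (exact match against the full command line, newest process) until a kube-apiserver process appears. A sketch of that pattern, assuming the 500ms cadence and the pgrep flags shown in the log (the function name and deadline handling are illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // Poll every 500ms until pgrep finds a kube-apiserver process (exit
    // status 0) or the context deadline expires.
    func waitForAPIServerProcess(ctx context.Context) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    			// pgrep flags from the log: -x exact match, -n newest,
    			// -f match against the full command line.
    			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    				return nil
    			}
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitForAPIServerProcess(ctx); err != nil {
    		fmt.Println("gave up waiting for kube-apiserver:", err)
    		return
    	}
    	fmt.Println("kube-apiserver process found")
    }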
	I1206 10:07:32.483844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:32.483952  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:32.508469  293728 cri.go:89] found id: ""
	I1206 10:07:32.508497  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.508505  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:32.508512  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:32.508574  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:32.533265  293728 cri.go:89] found id: ""
	I1206 10:07:32.533288  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.533297  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:32.533303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:32.533364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:32.562655  293728 cri.go:89] found id: ""
	I1206 10:07:32.562686  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.562695  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:32.562702  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:32.562769  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:32.587755  293728 cri.go:89] found id: ""
	I1206 10:07:32.587781  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.587789  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:32.587796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:32.587855  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:32.613253  293728 cri.go:89] found id: ""
	I1206 10:07:32.613284  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.613292  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:32.613305  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:32.613364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:32.638621  293728 cri.go:89] found id: ""
	I1206 10:07:32.638648  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.638656  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:32.638662  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:32.638775  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:32.663624  293728 cri.go:89] found id: ""
	I1206 10:07:32.663649  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.663657  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:32.663664  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:32.663724  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:32.687850  293728 cri.go:89] found id: ""
	I1206 10:07:32.687872  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.687881  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
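The cri.go/logs.go pairs above all follow one pattern: for each control-plane component, list matching containers with crictl ps -a --quiet --name=<component> and record the IDs; an empty result is what the log prints as found id: "" and 0 containers. A sketch of that lookup (the crictl flags are the ones in the log; the function name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // List all containers (running or not) whose name matches the given
    // component; --quiet restricts the output to container IDs.
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Println(c, "lookup failed:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }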
	I1206 10:07:32.687890  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:32.687901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:32.763755  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:32.763831  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:32.788174  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:32.788242  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:32.866103  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:32.857634    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.858159    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.859825    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.860421    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.862051    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:32.857634    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.858159    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.859825    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.860421    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.862051    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:32.866126  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:32.866138  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:32.891711  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:32.891745  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
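The final "container status" step uses a shell fallback: `which crictl || echo crictl` resolves crictl's full path (or leaves the bare name if which finds nothing), and if the crictl invocation fails entirely, || sudo docker ps -a retries with docker. Reproducing that one-liner from Go, with the bash -c wrapping taken verbatim from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Run the same fallback chain the log shows: prefer crictl, fall
    // back to docker if crictl is missing or errors out.
    func main() {
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    	}
    	fmt.Print(string(out))
    }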
	I1206 10:07:34.985041  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:35.094954  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:35.094988  293728 retry.go:31] will retry after 34.21540436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:07:31.722556  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:33.722685  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:35.421586  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:35.432096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:35.432164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:35.457419  293728 cri.go:89] found id: ""
	I1206 10:07:35.457442  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.457451  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:35.457457  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:35.457520  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:35.481490  293728 cri.go:89] found id: ""
	I1206 10:07:35.481513  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.481521  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:35.481527  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:35.481586  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:35.506409  293728 cri.go:89] found id: ""
	I1206 10:07:35.506432  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.506441  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:35.506447  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:35.506512  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:35.534896  293728 cri.go:89] found id: ""
	I1206 10:07:35.534923  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.534932  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:35.534939  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:35.534997  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:35.560020  293728 cri.go:89] found id: ""
	I1206 10:07:35.560043  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.560052  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:35.560058  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:35.560115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:35.584963  293728 cri.go:89] found id: ""
	I1206 10:07:35.585028  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.585042  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:35.585049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:35.585110  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:35.617464  293728 cri.go:89] found id: ""
	I1206 10:07:35.617487  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.617495  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:35.617501  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:35.617562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:35.642187  293728 cri.go:89] found id: ""
	I1206 10:07:35.642219  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.642228  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:35.642238  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:35.642250  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:35.655709  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:35.655738  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:35.728266  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:35.714434    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.715121    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.716831    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.717292    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.718947    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:35.714434    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.715121    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.716831    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.717292    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.718947    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:35.728336  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:35.728379  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:35.766222  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:35.766301  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:35.823000  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:35.823024  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:36.393185  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:36.458951  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:36.458990  293728 retry.go:31] will retry after 24.220809087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:38.379270  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:38.389923  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:38.389993  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:38.416450  293728 cri.go:89] found id: ""
	I1206 10:07:38.416517  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.416540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:38.416558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:38.416635  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:38.442635  293728 cri.go:89] found id: ""
	I1206 10:07:38.442663  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.442672  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:38.442680  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:38.442742  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:38.469797  293728 cri.go:89] found id: ""
	I1206 10:07:38.469824  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.469834  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:38.469840  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:38.469899  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:38.497073  293728 cri.go:89] found id: ""
	I1206 10:07:38.497098  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.497107  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:38.497113  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:38.497194  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:38.527432  293728 cri.go:89] found id: ""
	I1206 10:07:38.527465  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.527474  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:38.527481  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:38.527540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:38.554253  293728 cri.go:89] found id: ""
	I1206 10:07:38.554278  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.554290  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:38.554300  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:38.554368  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:38.580022  293728 cri.go:89] found id: ""
	I1206 10:07:38.580070  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.580080  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:38.580087  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:38.580165  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:38.604967  293728 cri.go:89] found id: ""
	I1206 10:07:38.604992  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.605001  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:38.605010  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:38.605041  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:38.672012  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:38.663132    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.663961    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.665865    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.666410    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.668022    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:38.663132    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.663961    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.665865    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.666410    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.668022    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:38.672044  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:38.672075  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:38.697533  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:38.697567  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:38.750151  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:38.750176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:38.835463  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:38.835500  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:07:35.722832  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:38.222743  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:41.350690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:41.361865  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:41.361934  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:41.387755  293728 cri.go:89] found id: ""
	I1206 10:07:41.387781  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.387789  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:41.387796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:41.387854  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:41.412482  293728 cri.go:89] found id: ""
	I1206 10:07:41.412510  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.412519  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:41.412526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:41.412591  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:41.437604  293728 cri.go:89] found id: ""
	I1206 10:07:41.437635  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.437644  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:41.437650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:41.437722  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:41.462503  293728 cri.go:89] found id: ""
	I1206 10:07:41.462573  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.462597  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:41.462616  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:41.462703  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:41.487720  293728 cri.go:89] found id: ""
	I1206 10:07:41.487742  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.487750  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:41.487757  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:41.487819  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:41.513291  293728 cri.go:89] found id: ""
	I1206 10:07:41.513321  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.513332  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:41.513342  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:41.513420  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:41.547109  293728 cri.go:89] found id: ""
	I1206 10:07:41.547132  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.547141  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:41.547147  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:41.547209  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:41.572514  293728 cri.go:89] found id: ""
	I1206 10:07:41.572585  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.572607  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:41.572628  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:41.572669  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:41.629345  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:41.629378  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:41.643897  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:41.643928  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:41.713946  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:41.705234    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.705673    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.707580    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.708362    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.710158    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:41.705234    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.705673    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.707580    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.708362    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.710158    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:41.714006  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:41.714025  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:41.745589  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:41.745645  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:41.830134  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:41.893553  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:41.893593  293728 retry.go:31] will retry after 44.351115962s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:44.324517  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:44.335432  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:44.335507  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:44.365594  293728 cri.go:89] found id: ""
	I1206 10:07:44.365621  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.365630  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:44.365637  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:44.365723  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:44.390876  293728 cri.go:89] found id: ""
	I1206 10:07:44.390909  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.390919  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:44.390944  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:44.391026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:44.421424  293728 cri.go:89] found id: ""
	I1206 10:07:44.421448  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.421462  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:44.421468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:44.421525  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:44.445299  293728 cri.go:89] found id: ""
	I1206 10:07:44.445325  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.445335  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:44.445341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:44.445454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:44.473977  293728 cri.go:89] found id: ""
	I1206 10:07:44.473999  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.474008  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:44.474014  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:44.474072  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:44.501273  293728 cri.go:89] found id: ""
	I1206 10:07:44.501299  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.501308  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:44.501341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:44.501415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:44.525106  293728 cri.go:89] found id: ""
	I1206 10:07:44.525136  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.525154  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:44.525161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:44.525223  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:44.550546  293728 cri.go:89] found id: ""
	I1206 10:07:44.550571  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.550580  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:44.550589  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:44.550600  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:44.615941  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:44.607694    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.608515    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610041    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610630    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.612121    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:44.607694    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.608515    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610041    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610630    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.612121    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:44.615962  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:44.615975  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:44.641346  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:44.641377  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:44.669493  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:44.669520  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:44.727196  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:44.727357  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:07:40.722832  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:43.222679  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:45.222775  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
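The interleaved 287962-prefixed warnings come from the concurrently running no-preload test, whose node_ready.go polls the node's Ready condition roughly every 2.5s and keeps retrying through the same connection-refused errors. Conceptually the check looks like the following client-go sketch (the kubeconfig path, node name, poll interval, and timeout are illustrative, not minikube's actual code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // Fetch the node and report whether its Ready condition is True.
    // While the API server is down, Get returns the "connect: connection
    // refused" errors seen above and the caller simply retries.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		ready, err := nodeReady(context.Background(), cs, "no-preload-257359")
    		if err != nil {
    			fmt.Println("error getting node (will retry):", err)
    		} else if ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for node Ready")
    }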
	I1206 10:07:47.260652  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:47.271164  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:47.271238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:47.295481  293728 cri.go:89] found id: ""
	I1206 10:07:47.295506  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.295515  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:47.295521  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:47.295581  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:47.321861  293728 cri.go:89] found id: ""
	I1206 10:07:47.321884  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.321892  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:47.321898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:47.321954  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:47.346071  293728 cri.go:89] found id: ""
	I1206 10:07:47.346094  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.346103  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:47.346110  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:47.346169  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:47.373210  293728 cri.go:89] found id: ""
	I1206 10:07:47.373234  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.373242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:47.373249  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:47.373312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:47.403706  293728 cri.go:89] found id: ""
	I1206 10:07:47.403729  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.403739  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:47.403745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:47.403810  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:47.433807  293728 cri.go:89] found id: ""
	I1206 10:07:47.433831  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.433840  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:47.433847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:47.433904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:47.462210  293728 cri.go:89] found id: ""
	I1206 10:07:47.462233  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.462241  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:47.462247  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:47.462308  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:47.486445  293728 cri.go:89] found id: ""
	I1206 10:07:47.486523  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.486546  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:47.486567  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:47.486597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:47.500083  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:47.500114  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:47.568637  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:47.558715    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.559476    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561148    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561466    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.564516    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:47.568661  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:47.568683  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:47.598178  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:47.598213  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:47.629224  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:47.629249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
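	From here through 10:08:08 this gather loop repeats unchanged: every "crictl ps" query for a control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) returns an empty ID, and "kubectl describe nodes" fails because nothing answers on localhost:8443. A minimal sketch of running the same checks by hand from inside the node, assuming curl is present in the node image (the profile name is a placeholder, not taken from this log):
	
	    minikube ssh -p <profile>                        # <profile> is hypothetical
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # no PID means the apiserver process is absent
	    sudo crictl ps -a --quiet --name=kube-apiserver  # empty output means no apiserver container
	    curl -ks https://localhost:8443/healthz          # connection refused while the apiserver is down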
	I1206 10:07:50.187574  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:47.727856  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:50.223331  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:50.198529  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:50.198609  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:50.224708  293728 cri.go:89] found id: ""
	I1206 10:07:50.224731  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.224738  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:50.224744  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:50.224806  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:50.253337  293728 cri.go:89] found id: ""
	I1206 10:07:50.253361  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.253370  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:50.253376  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:50.253433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:50.278723  293728 cri.go:89] found id: ""
	I1206 10:07:50.278750  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.278759  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:50.278766  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:50.278830  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:50.308736  293728 cri.go:89] found id: ""
	I1206 10:07:50.308803  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.308822  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:50.308834  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:50.308894  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:50.333136  293728 cri.go:89] found id: ""
	I1206 10:07:50.333162  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.333171  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:50.333177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:50.333263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:50.358071  293728 cri.go:89] found id: ""
	I1206 10:07:50.358105  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.358114  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:50.358137  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:50.358215  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:50.382078  293728 cri.go:89] found id: ""
	I1206 10:07:50.382111  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.382120  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:50.382141  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:50.382222  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:50.407225  293728 cri.go:89] found id: ""
	I1206 10:07:50.407261  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.407270  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:50.407279  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:50.407291  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:50.466553  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:50.466588  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:50.480420  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:50.480450  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:50.546503  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:50.538132    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.538890    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.540463    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.541036    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.542600    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:50.546523  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:50.546546  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:50.573208  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:50.573243  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:53.100604  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:53.111611  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:53.111683  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:53.136465  293728 cri.go:89] found id: ""
	I1206 10:07:53.136494  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.136503  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:53.136510  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:53.136584  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:53.167397  293728 cri.go:89] found id: ""
	I1206 10:07:53.167419  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.167427  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:53.167433  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:53.167501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:53.191735  293728 cri.go:89] found id: ""
	I1206 10:07:53.191769  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.191778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:53.191784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:53.191849  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:53.216472  293728 cri.go:89] found id: ""
	I1206 10:07:53.216495  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.216506  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:53.216513  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:53.216570  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:53.242936  293728 cri.go:89] found id: ""
	I1206 10:07:53.242957  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.242966  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:53.242972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:53.243035  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:53.274015  293728 cri.go:89] found id: ""
	I1206 10:07:53.274041  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.274050  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:53.274056  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:53.274118  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:53.303348  293728 cri.go:89] found id: ""
	I1206 10:07:53.303371  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.303415  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:53.303422  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:53.303486  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:53.332691  293728 cri.go:89] found id: ""
	I1206 10:07:53.332716  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.332724  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:53.332733  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:53.332749  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:53.346274  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:53.346303  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:53.412178  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:53.403243    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.404038    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.405704    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.406009    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.408013    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:53.412203  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:53.412216  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:53.437974  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:53.438008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:53.469789  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:53.469816  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:07:52.723301  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:55.222438  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:56.029614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:56.044312  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:56.044385  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:56.074035  293728 cri.go:89] found id: ""
	I1206 10:07:56.074061  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.074071  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:56.074077  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:56.074137  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:56.101362  293728 cri.go:89] found id: ""
	I1206 10:07:56.101387  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.101397  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:56.101403  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:56.101472  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:56.132837  293728 cri.go:89] found id: ""
	I1206 10:07:56.132867  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.132876  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:56.132882  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:56.132949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:56.162095  293728 cri.go:89] found id: ""
	I1206 10:07:56.162121  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.162129  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:56.162136  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:56.162195  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:56.190088  293728 cri.go:89] found id: ""
	I1206 10:07:56.190113  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.190122  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:56.190128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:56.190188  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:56.217327  293728 cri.go:89] found id: ""
	I1206 10:07:56.217355  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.217365  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:56.217372  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:56.217432  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:56.242210  293728 cri.go:89] found id: ""
	I1206 10:07:56.242246  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.242255  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:56.242261  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:56.242330  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:56.266843  293728 cri.go:89] found id: ""
	I1206 10:07:56.266871  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.266879  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:56.266888  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:56.266900  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:56.324906  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:56.324941  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:56.339074  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:56.339111  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:56.407395  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:56.398763    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.399992    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.400889    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.401941    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.403601    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:56.407417  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:56.407434  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:56.433408  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:56.433442  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:58.962420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:58.984606  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:58.984688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:59.037604  293728 cri.go:89] found id: ""
	I1206 10:07:59.037795  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.038054  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:59.038096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:59.038236  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:59.074512  293728 cri.go:89] found id: ""
	I1206 10:07:59.074555  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.074564  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:59.074571  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:59.074638  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:59.101868  293728 cri.go:89] found id: ""
	I1206 10:07:59.101895  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.101904  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:59.101910  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:59.101973  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:59.127188  293728 cri.go:89] found id: ""
	I1206 10:07:59.127214  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.127223  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:59.127230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:59.127286  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:59.152234  293728 cri.go:89] found id: ""
	I1206 10:07:59.152259  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.152268  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:59.152274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:59.152342  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:59.177629  293728 cri.go:89] found id: ""
	I1206 10:07:59.177654  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.177663  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:59.177670  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:59.177728  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:59.202156  293728 cri.go:89] found id: ""
	I1206 10:07:59.202185  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.202195  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:59.202201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:59.202261  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:59.227130  293728 cri.go:89] found id: ""
	I1206 10:07:59.227165  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.227174  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:59.227183  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:59.227204  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:59.241522  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:59.241597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:59.311704  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:59.302465    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.302959    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.304730    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.305205    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.306765    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:59.311730  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:59.311742  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:59.337213  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:59.337246  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:59.365911  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:59.365940  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:07:57.222678  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:59.223226  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:00.680788  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:08:00.745958  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:00.746077  293728 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
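	The storage-provisioner apply fails for the same underlying reason: with the apiserver unreachable, kubectl cannot download the OpenAPI schema, so client-side validation aborts before anything is sent. A hedged sketch of isolating that, using the exact command from the log plus the --validate=false flag the error message itself suggests:
	
	    # Run inside the node; paths are the ones shown in the log above.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml
	    # While the apiserver is down this still fails, but now at the connection
	    # step rather than at validation, confirming apiserver availability is the blocker.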
	I1206 10:08:01.925540  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:01.936468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:01.936592  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:01.965164  293728 cri.go:89] found id: ""
	I1206 10:08:01.965242  293728 logs.go:282] 0 containers: []
	W1206 10:08:01.965277  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:01.965302  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:01.965393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:02.013736  293728 cri.go:89] found id: ""
	I1206 10:08:02.013774  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.013783  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:02.013790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:02.013862  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:02.058535  293728 cri.go:89] found id: ""
	I1206 10:08:02.058627  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.058651  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:02.058685  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:02.058798  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:02.091149  293728 cri.go:89] found id: ""
	I1206 10:08:02.091213  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.091242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:02.091286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:02.091460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:02.116844  293728 cri.go:89] found id: ""
	I1206 10:08:02.116870  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.116878  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:02.116884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:02.116945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:02.143338  293728 cri.go:89] found id: ""
	I1206 10:08:02.143439  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.143463  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:02.143485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:02.143573  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:02.169310  293728 cri.go:89] found id: ""
	I1206 10:08:02.169333  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.169342  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:02.169348  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:02.169410  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:02.200025  293728 cri.go:89] found id: ""
	I1206 10:08:02.200096  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.200104  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:02.200113  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:02.200125  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:02.257304  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:02.257340  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:02.271507  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:02.271541  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:02.341058  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:02.331854    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.332684    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334338    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334769    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.336486    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:08:02.341084  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:02.341097  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:02.367636  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:02.367672  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:04.899503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:04.910154  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:04.910231  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:04.934598  293728 cri.go:89] found id: ""
	I1206 10:08:04.934623  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.934632  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:04.934638  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:04.934699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:04.959971  293728 cri.go:89] found id: ""
	I1206 10:08:04.959995  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.960004  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:04.960010  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:04.960071  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:05.027645  293728 cri.go:89] found id: ""
	I1206 10:08:05.027668  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.027677  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:05.027683  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:05.027758  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:05.077828  293728 cri.go:89] found id: ""
	I1206 10:08:05.077868  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.077878  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:05.077884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:05.077946  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:05.103986  293728 cri.go:89] found id: ""
	I1206 10:08:05.104014  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.104023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:05.104029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:05.104091  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:05.129703  293728 cri.go:89] found id: ""
	I1206 10:08:05.129778  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.129822  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:05.129843  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:05.129930  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:05.156958  293728 cri.go:89] found id: ""
	I1206 10:08:05.156982  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.156990  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:05.156996  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:05.157058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:05.182537  293728 cri.go:89] found id: ""
	I1206 10:08:05.182565  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.182575  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:05.182585  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:05.182598  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:08:01.722650  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:04.222533  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:05.196389  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:05.196419  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:05.262239  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:05.253199    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.253990    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.255826    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.256391    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.257908    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:08:05.262265  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:05.262278  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:05.288138  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:05.288178  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:05.316468  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:05.316497  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:07.872986  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:07.886594  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:07.886666  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:07.912554  293728 cri.go:89] found id: ""
	I1206 10:08:07.912580  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.912589  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:07.912595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:07.912668  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:07.938006  293728 cri.go:89] found id: ""
	I1206 10:08:07.938033  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.938042  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:07.938049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:07.938107  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:07.967969  293728 cri.go:89] found id: ""
	I1206 10:08:07.967995  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.968004  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:07.968011  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:07.968079  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:08.001472  293728 cri.go:89] found id: ""
	I1206 10:08:08.001495  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.001504  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:08.001511  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:08.001577  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:08.064509  293728 cri.go:89] found id: ""
	I1206 10:08:08.064538  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.064547  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:08.064554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:08.064612  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:08.094308  293728 cri.go:89] found id: ""
	I1206 10:08:08.094376  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.094402  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:08.094434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:08.094522  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:08.124650  293728 cri.go:89] found id: ""
	I1206 10:08:08.124695  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.124705  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:08.124712  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:08.124782  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:08.150816  293728 cri.go:89] found id: ""
	I1206 10:08:08.150851  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.150860  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:08.150868  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:08.150879  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:08.207170  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:08.207203  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:08.220834  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:08.220860  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:08.285113  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:08.285138  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:08.285153  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:08.311342  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:08.311548  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
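
The round above is minikube probing the container runtime for each control-plane component in turn (cri.go:54), shelling out to crictl and finding nothing: every probe returns `found id: ""` and `0 containers: []`, which means the components were never created, not merely crashed. A minimal sketch of such a probe, assuming only that crictl is on the node's PATH and reusing the component names from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers in any state whose
// name matches the given component, the same query the log's probe runs.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	// crictl prints one container ID per line; empty output means no match.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids) // the failing run prints 0 for all of them
	}
}
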
	I1206 10:08:09.310714  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:08:09.371609  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:09.371709  293728 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
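
addons.go:477 classifies the failed apply as retryable ("apply failed, will retry") rather than fatal, so the storageclass manifest is queued to be re-applied if the apiserver ever comes up. A hedged sketch of that retry shape; the command line is taken verbatim from the log, but the attempt count and delay are illustrative assumptions, not minikube's actual policy:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or attempts run out.
// The kubectl path, KUBECONFIG, and manifest mirror the log; the policy does not.
func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 10*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
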
	W1206 10:08:06.222644  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:08.722561  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
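
The two W-level lines above come from a second, concurrently running test whose output is interleaved into the same log: pid 287962 drives the `no-preload-257359` profile and is polling its node's Ready condition against 192.168.76.2:8443, hitting the same connection refusal. A sketch of that style of poll using client-go; the interval is an assumption, while the warn-and-retry behaviour on transient errors mirrors the node_ready.go:55 lines:

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True.
// Transient API errors (like "connection refused") are logged and retried,
// matching the warnings in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}
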
	I1206 10:08:10.840228  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:10.850847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:10.850914  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:10.881439  293728 cri.go:89] found id: ""
	I1206 10:08:10.881517  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.881540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:10.881555  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:10.881629  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:10.910942  293728 cri.go:89] found id: ""
	I1206 10:08:10.910971  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.910980  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:10.910987  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:10.911049  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:10.936471  293728 cri.go:89] found id: ""
	I1206 10:08:10.936495  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.936503  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:10.936509  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:10.936566  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:10.964540  293728 cri.go:89] found id: ""
	I1206 10:08:10.964567  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.964575  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:10.964581  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:10.964650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:11.035295  293728 cri.go:89] found id: ""
	I1206 10:08:11.035322  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.035332  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:11.035354  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:11.035433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:11.081240  293728 cri.go:89] found id: ""
	I1206 10:08:11.081266  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.081275  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:11.081282  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:11.081347  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:11.109502  293728 cri.go:89] found id: ""
	I1206 10:08:11.109543  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.109554  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:11.109561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:11.109625  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:11.138072  293728 cri.go:89] found id: ""
	I1206 10:08:11.138100  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.138113  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:11.138122  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:11.138134  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:11.207996  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:11.208060  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:11.208081  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:11.234490  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:11.234525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:11.263495  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:11.263525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:11.323991  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:11.324034  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
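
Each diagnostic round gathers the same four working sources (logs.go:123): the kubelet and containerd journald units, kernel warnings from dmesg, and a crictl (or docker) container listing; only the "describe nodes" source fails, because it is the one that needs the apiserver. A sketch that runs the same shell commands in sequence and tolerates individual failures, executed locally here rather than through minikube's SSH runner, which is elided:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs each diagnostic command from the log above and returns
// whatever it can; one failing source must not abort the rest.
func gatherLogs() map[string]string {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	out := make(map[string]string)
	for name, c := range cmds {
		b, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("(failed: %v)\n%s", err, b)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, text := range gatherLogs() {
		fmt.Printf("== %s ==\n%s\n", name, text)
	}
}
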
	I1206 10:08:13.838014  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:13.849112  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:13.849181  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:13.873403  293728 cri.go:89] found id: ""
	I1206 10:08:13.873472  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.873498  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:13.873515  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:13.873602  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:13.900596  293728 cri.go:89] found id: ""
	I1206 10:08:13.900616  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.900625  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:13.900631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:13.900694  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:13.925385  293728 cri.go:89] found id: ""
	I1206 10:08:13.925409  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.925417  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:13.925424  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:13.925481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:13.950796  293728 cri.go:89] found id: ""
	I1206 10:08:13.950823  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.950837  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:13.950844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:13.950902  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:14.028934  293728 cri.go:89] found id: ""
	I1206 10:08:14.028964  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.028973  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:14.028979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:14.029058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:14.063925  293728 cri.go:89] found id: ""
	I1206 10:08:14.063948  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.063957  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:14.063963  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:14.064024  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:14.091439  293728 cri.go:89] found id: ""
	I1206 10:08:14.091465  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.091473  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:14.091480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:14.091556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:14.116453  293728 cri.go:89] found id: ""
	I1206 10:08:14.116476  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.116485  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:14.116494  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:14.116506  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:14.173576  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:14.173615  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:14.187707  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:14.187736  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:14.256417  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:14.256440  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:14.256452  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:14.281458  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:14.281490  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:10.722908  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:13.223465  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:16.809300  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:16.820406  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:16.820481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:16.845040  293728 cri.go:89] found id: ""
	I1206 10:08:16.845105  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.845130  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:16.845144  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:16.845217  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:16.875450  293728 cri.go:89] found id: ""
	I1206 10:08:16.875475  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.875484  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:16.875500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:16.875562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:16.902002  293728 cri.go:89] found id: ""
	I1206 10:08:16.902048  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.902059  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:16.902068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:16.902146  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:16.927319  293728 cri.go:89] found id: ""
	I1206 10:08:16.927353  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.927361  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:16.927368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:16.927466  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:16.952239  293728 cri.go:89] found id: ""
	I1206 10:08:16.952265  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.952273  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:16.952280  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:16.952386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:16.994322  293728 cri.go:89] found id: ""
	I1206 10:08:16.994351  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.994360  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:16.994368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:16.994437  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:17.032079  293728 cri.go:89] found id: ""
	I1206 10:08:17.032113  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.032122  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:17.032128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:17.032201  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:17.079256  293728 cri.go:89] found id: ""
	I1206 10:08:17.079321  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.079343  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:17.079364  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:17.079406  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:17.104677  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:17.104707  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:17.136676  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:17.136701  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:17.195915  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:17.195950  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:17.209626  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:17.209653  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:17.278745  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
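
Every polling round opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` (as on the next line), a cheap process-level check for an apiserver before the per-component crictl probes run. Since every kubectl failure above is a plain TCP refusal on localhost:8443, the same health question can be asked without kubectl at all; a minimal check, with the host and port taken from the log:

package main

import (
	"fmt"
	"net"
	"os/exec"
	"time"
)

// apiserverUp mirrors the log's two-step check: is a kube-apiserver process
// running at all, and is anything accepting TCP on the secure port?
func apiserverUp() bool {
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		return false // pgrep exits non-zero when no process matches
	}
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		return false // the "connect: connection refused" case seen throughout this log
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("apiserver up:", apiserverUp())
}
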
	I1206 10:08:19.780767  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:19.791658  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:19.791756  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:19.820516  293728 cri.go:89] found id: ""
	I1206 10:08:19.820539  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.820547  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:19.820554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:19.820652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:19.845473  293728 cri.go:89] found id: ""
	I1206 10:08:19.845499  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.845507  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:19.845514  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:19.845572  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:19.871555  293728 cri.go:89] found id: ""
	I1206 10:08:19.871580  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.871592  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:19.871598  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:19.871658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:19.902754  293728 cri.go:89] found id: ""
	I1206 10:08:19.902778  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.902787  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:19.902793  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:19.902853  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:19.927447  293728 cri.go:89] found id: ""
	I1206 10:08:19.927473  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.927482  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:19.927489  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:19.927549  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:19.951607  293728 cri.go:89] found id: ""
	I1206 10:08:19.951634  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.951644  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:19.951651  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:19.951718  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:20.023839  293728 cri.go:89] found id: ""
	I1206 10:08:20.023868  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.023879  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:20.023886  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:20.023951  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:20.064702  293728 cri.go:89] found id: ""
	I1206 10:08:20.064730  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.064739  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:20.064748  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:20.064761  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:20.131531  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:20.131555  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:20.131566  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:20.157955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:20.157991  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:20.188100  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:20.188126  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:15.723287  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:18.223318  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:20.248399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:20.248437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:22.762476  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:22.774338  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:22.774408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:22.803197  293728 cri.go:89] found id: ""
	I1206 10:08:22.803220  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.803228  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:22.803234  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:22.803292  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:22.828985  293728 cri.go:89] found id: ""
	I1206 10:08:22.829009  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.829018  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:22.829024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:22.829084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:22.857670  293728 cri.go:89] found id: ""
	I1206 10:08:22.857695  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.857704  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:22.857710  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:22.857770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:22.886863  293728 cri.go:89] found id: ""
	I1206 10:08:22.886889  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.886898  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:22.886905  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:22.886967  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:22.912046  293728 cri.go:89] found id: ""
	I1206 10:08:22.912072  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.912080  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:22.912086  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:22.912149  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:22.940438  293728 cri.go:89] found id: ""
	I1206 10:08:22.940516  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.940530  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:22.940538  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:22.940597  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:22.965932  293728 cri.go:89] found id: ""
	I1206 10:08:22.965957  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.965966  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:22.965973  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:22.966034  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:23.036167  293728 cri.go:89] found id: ""
	I1206 10:08:23.036194  293728 logs.go:282] 0 containers: []
	W1206 10:08:23.036203  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:23.036212  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:23.036224  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:23.054454  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:23.054481  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:23.120660  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:23.120680  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:23.120692  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:23.146879  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:23.146913  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:23.177356  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:23.177389  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:20.722592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:23.222550  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:25.739842  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:25.751155  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:25.751238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:25.781790  293728 cri.go:89] found id: ""
	I1206 10:08:25.781813  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.781821  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:25.781828  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:25.781884  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:25.809915  293728 cri.go:89] found id: ""
	I1206 10:08:25.809940  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.809948  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:25.809954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:25.810014  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:25.840293  293728 cri.go:89] found id: ""
	I1206 10:08:25.840318  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.840327  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:25.840334  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:25.840390  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:25.869368  293728 cri.go:89] found id: ""
	I1206 10:08:25.869401  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.869410  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:25.869416  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:25.869488  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:25.898302  293728 cri.go:89] found id: ""
	I1206 10:08:25.898335  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.898344  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:25.898351  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:25.898417  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:25.925837  293728 cri.go:89] found id: ""
	I1206 10:08:25.925864  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.925873  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:25.925880  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:25.925940  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:25.950501  293728 cri.go:89] found id: ""
	I1206 10:08:25.950537  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.950546  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:25.950552  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:25.950618  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:26.003264  293728 cri.go:89] found id: ""
	I1206 10:08:26.003294  293728 logs.go:282] 0 containers: []
	W1206 10:08:26.003305  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:26.003316  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:26.003327  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:26.046472  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:26.046503  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:26.091770  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:26.091798  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:26.148719  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:26.148755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:26.165689  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:26.165733  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:26.231230  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:26.245490  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:08:26.310812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:26.310914  293728 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:08:26.314238  293728 out.go:179] * Enabled addons: 
	I1206 10:08:26.317143  293728 addons.go:530] duration metric: took 1m53.881766525s for enable addons: enabled=[]
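The validation failures above are a symptom, not the cause: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443, so even the suggested --validate=false would only skip schema validation and the apply itself would still be refused. A minimal pre-flight probe, assuming the same kubeconfig and kubectl paths recorded in the log and that the /readyz endpoint is served, might look like:

	# Sketch: check apiserver readiness before retrying the addon apply.
	# Paths are the ones from the log above; /readyz availability is an assumption.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  get --raw /readyz --request-timeout=5s \
	  || echo "apiserver not ready; addon apply would fail regardless of --validate"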
	I1206 10:08:28.731518  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:28.742380  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:28.742460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:28.768392  293728 cri.go:89] found id: ""
	I1206 10:08:28.768416  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.768425  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:28.768431  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:28.768489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:28.795017  293728 cri.go:89] found id: ""
	I1206 10:08:28.795043  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.795052  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:28.795059  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:28.795130  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:28.831707  293728 cri.go:89] found id: ""
	I1206 10:08:28.831734  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.831742  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:28.831748  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:28.831807  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:28.857267  293728 cri.go:89] found id: ""
	I1206 10:08:28.857293  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.857304  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:28.857317  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:28.857415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:28.887732  293728 cri.go:89] found id: ""
	I1206 10:08:28.887754  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.887762  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:28.887769  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:28.887827  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:28.912905  293728 cri.go:89] found id: ""
	I1206 10:08:28.912970  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.912984  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:28.912992  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:28.913051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:28.937740  293728 cri.go:89] found id: ""
	I1206 10:08:28.937764  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.937774  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:28.937781  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:28.937840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:28.964042  293728 cri.go:89] found id: ""
	I1206 10:08:28.964111  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.964126  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:28.964135  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:28.964147  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:29.034399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:29.034439  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:29.059150  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:29.059176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:29.134200  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:29.134222  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:29.134235  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:29.160868  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:29.160901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
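The probe cycle above (pgrep for kube-apiserver, then one crictl query per control-plane component) repeats every few seconds for the rest of this test, and each pass finds no containers at all. Reproducing the same check by hand, using only the crictl invocation the log itself records, would be roughly:

	# Sketch: the per-component container probe, mirroring the logged crictl calls.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "no container found matching \"$c\""
	done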
	W1206 10:08:25.722683  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:27.723593  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:30.222645  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:31.689201  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:31.700497  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:31.700569  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:31.726402  293728 cri.go:89] found id: ""
	I1206 10:08:31.726426  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.726434  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:31.726441  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:31.726503  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:31.752620  293728 cri.go:89] found id: ""
	I1206 10:08:31.752644  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.752652  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:31.752659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:31.752720  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:31.778722  293728 cri.go:89] found id: ""
	I1206 10:08:31.778749  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.778758  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:31.778764  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:31.778825  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:31.804730  293728 cri.go:89] found id: ""
	I1206 10:08:31.804754  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.804762  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:31.804768  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:31.804828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:31.834276  293728 cri.go:89] found id: ""
	I1206 10:08:31.834303  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.834312  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:31.834322  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:31.834388  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:31.859721  293728 cri.go:89] found id: ""
	I1206 10:08:31.859744  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.859752  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:31.859759  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:31.859889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:31.888679  293728 cri.go:89] found id: ""
	I1206 10:08:31.888746  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.888760  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:31.888767  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:31.888828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:31.915769  293728 cri.go:89] found id: ""
	I1206 10:08:31.915794  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.915804  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:31.915812  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:31.915825  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:31.929129  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:31.929155  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:32.017380  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:32.017406  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:32.017420  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:32.046135  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:32.046218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:32.081462  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:32.081485  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:34.642406  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:34.653187  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:34.653263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:34.683091  293728 cri.go:89] found id: ""
	I1206 10:08:34.683116  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.683124  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:34.683130  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:34.683189  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:34.709426  293728 cri.go:89] found id: ""
	I1206 10:08:34.709453  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.709462  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:34.709468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:34.709528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:34.740189  293728 cri.go:89] found id: ""
	I1206 10:08:34.740215  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.740223  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:34.740230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:34.740289  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:34.769902  293728 cri.go:89] found id: ""
	I1206 10:08:34.769932  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.769942  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:34.769954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:34.770026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:34.797331  293728 cri.go:89] found id: ""
	I1206 10:08:34.797358  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.797367  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:34.797374  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:34.797434  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:34.823286  293728 cri.go:89] found id: ""
	I1206 10:08:34.823309  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.823318  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:34.823324  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:34.823406  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:34.849130  293728 cri.go:89] found id: ""
	I1206 10:08:34.849153  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.849162  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:34.849168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:34.849229  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:34.873883  293728 cri.go:89] found id: ""
	I1206 10:08:34.873905  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.873913  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:34.873922  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:34.873933  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:34.929942  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:34.929976  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:34.944124  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:34.944205  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:35.057155  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:35.057180  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:35.057193  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:35.090699  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:35.090741  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:32.223260  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:34.723506  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:37.620713  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:37.631409  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:37.631478  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:37.668926  293728 cri.go:89] found id: ""
	I1206 10:08:37.668949  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.668958  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:37.668966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:37.669025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:37.698809  293728 cri.go:89] found id: ""
	I1206 10:08:37.698831  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.698840  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:37.698846  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:37.698905  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:37.726123  293728 cri.go:89] found id: ""
	I1206 10:08:37.726146  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.726155  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:37.726161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:37.726219  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:37.750745  293728 cri.go:89] found id: ""
	I1206 10:08:37.750818  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.750842  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:37.750861  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:37.750945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:37.777744  293728 cri.go:89] found id: ""
	I1206 10:08:37.777814  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.777837  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:37.777857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:37.777945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:37.804124  293728 cri.go:89] found id: ""
	I1206 10:08:37.804151  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.804160  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:37.804166  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:37.804243  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:37.828930  293728 cri.go:89] found id: ""
	I1206 10:08:37.828995  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.829010  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:37.829017  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:37.829076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:37.853436  293728 cri.go:89] found id: ""
	I1206 10:08:37.853459  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.853468  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:37.853476  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:37.853493  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:37.910673  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:37.910709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:37.926464  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:37.926504  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:38.046192  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:38.019476    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.031978    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.032900    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037073    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037736    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:38.019476    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.031978    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.032900    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037073    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037736    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:38.046217  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:38.046230  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:38.078770  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:38.078805  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:37.222544  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:39.222587  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:40.613605  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:40.624180  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:40.624256  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:40.648680  293728 cri.go:89] found id: ""
	I1206 10:08:40.648706  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.648715  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:40.648721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:40.648783  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:40.674691  293728 cri.go:89] found id: ""
	I1206 10:08:40.674716  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.674725  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:40.674732  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:40.674802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:40.700970  293728 cri.go:89] found id: ""
	I1206 10:08:40.700997  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.701006  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:40.701013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:40.701076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:40.729911  293728 cri.go:89] found id: ""
	I1206 10:08:40.729940  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.729949  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:40.729956  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:40.730020  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:40.755581  293728 cri.go:89] found id: ""
	I1206 10:08:40.755611  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.755620  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:40.755626  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:40.755686  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:40.781938  293728 cri.go:89] found id: ""
	I1206 10:08:40.782007  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.782030  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:40.782051  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:40.782139  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:40.811855  293728 cri.go:89] found id: ""
	I1206 10:08:40.811880  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.811889  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:40.811895  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:40.811961  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:40.841527  293728 cri.go:89] found id: ""
	I1206 10:08:40.841553  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.841562  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:40.841571  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:40.841583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:40.854956  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:40.854983  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:40.924783  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:40.916653    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.917278    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.918774    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.919183    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.920651    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:40.916653    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.917278    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.918774    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.919183    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.920651    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:40.924807  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:40.924823  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:40.950611  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:40.950646  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:41.021978  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:41.022008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:43.596447  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:43.607463  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:43.607540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:43.632638  293728 cri.go:89] found id: ""
	I1206 10:08:43.632660  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.632668  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:43.632675  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:43.632737  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:43.657538  293728 cri.go:89] found id: ""
	I1206 10:08:43.657616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.657632  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:43.657639  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:43.657711  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:43.683595  293728 cri.go:89] found id: ""
	I1206 10:08:43.683621  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.683630  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:43.683636  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:43.683706  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:43.709348  293728 cri.go:89] found id: ""
	I1206 10:08:43.709371  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.709380  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:43.709387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:43.709451  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:43.734592  293728 cri.go:89] found id: ""
	I1206 10:08:43.734616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.734625  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:43.734631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:43.734689  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:43.761297  293728 cri.go:89] found id: ""
	I1206 10:08:43.761362  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.761387  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:43.761405  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:43.761493  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:43.789795  293728 cri.go:89] found id: ""
	I1206 10:08:43.789831  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.789840  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:43.789847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:43.789919  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:43.817708  293728 cri.go:89] found id: ""
	I1206 10:08:43.817735  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.817744  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:43.817762  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:43.817774  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:43.831448  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:43.831483  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:43.897033  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:43.888843    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.889730    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891528    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891839    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.893322    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:43.888843    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.889730    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891528    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891839    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.893322    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:43.897107  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:43.897131  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:43.922955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:43.922990  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:43.960423  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:43.960457  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:41.722543  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:43.723229  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
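The interleaved node_ready warnings come from a second profile (no-preload-257359, process 287962) polling its own apiserver at 192.168.76.2:8443 and hitting the same connection-refused error. A direct check of that endpoint from the test host, assuming network reachability, is a one-liner (-k skips certificate verification, which is acceptable for a reachability probe):

	# Sketch: probe the no-preload apiserver directly; expect a refusal here.
	curl -sk --max-time 5 https://192.168.76.2:8443/healthz \
	  || echo "apiserver unreachable, matching the node_ready warnings"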
	I1206 10:08:46.534389  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:46.545120  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:46.545205  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:46.570287  293728 cri.go:89] found id: ""
	I1206 10:08:46.570313  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.570322  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:46.570328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:46.570391  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:46.600524  293728 cri.go:89] found id: ""
	I1206 10:08:46.600609  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.600631  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:46.600650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:46.600734  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:46.627292  293728 cri.go:89] found id: ""
	I1206 10:08:46.627314  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.627322  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:46.627328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:46.627424  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:46.652620  293728 cri.go:89] found id: ""
	I1206 10:08:46.652642  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.652651  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:46.652657  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:46.652716  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:46.681992  293728 cri.go:89] found id: ""
	I1206 10:08:46.682015  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.682023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:46.682029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:46.682087  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:46.708290  293728 cri.go:89] found id: ""
	I1206 10:08:46.708363  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.708408  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:46.708434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:46.708528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:46.737816  293728 cri.go:89] found id: ""
	I1206 10:08:46.737890  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.737915  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:46.737935  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:46.738021  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:46.768334  293728 cri.go:89] found id: ""
	I1206 10:08:46.768407  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.768430  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:46.768451  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:46.768491  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:46.782268  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:46.782344  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:46.850687  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:46.840824    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.841622    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.843626    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.844354    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.846055    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:46.840824    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.841622    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.843626    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.844354    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.846055    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:46.850714  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:46.850727  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:46.877310  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:46.877362  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:46.909345  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:46.909376  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:49.467346  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:49.477899  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:49.477971  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:49.502546  293728 cri.go:89] found id: ""
	I1206 10:08:49.502569  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.502578  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:49.502584  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:49.502646  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:49.527592  293728 cri.go:89] found id: ""
	I1206 10:08:49.527663  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.527686  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:49.527699  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:49.527760  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:49.553748  293728 cri.go:89] found id: ""
	I1206 10:08:49.553770  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.553778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:49.553784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:49.553841  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:49.580182  293728 cri.go:89] found id: ""
	I1206 10:08:49.580205  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.580214  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:49.580220  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:49.580285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:49.609009  293728 cri.go:89] found id: ""
	I1206 10:08:49.609034  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.609043  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:49.609050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:49.609114  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:49.634196  293728 cri.go:89] found id: ""
	I1206 10:08:49.634218  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.634227  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:49.634233  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:49.634293  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:49.660015  293728 cri.go:89] found id: ""
	I1206 10:08:49.660038  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.660047  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:49.660053  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:49.660115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:49.685329  293728 cri.go:89] found id: ""
	I1206 10:08:49.685355  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.685364  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:49.685373  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:49.685385  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:49.699189  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:49.699218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:49.768229  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:49.760011    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.760509    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762154    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762619    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.764026    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:49.760011    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.760509    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762154    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762619    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.764026    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:49.768253  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:49.768267  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:49.794221  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:49.794255  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:49.825320  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:49.825349  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:46.222859  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:48.223148  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:50.223492  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:52.381962  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:52.392897  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:52.392974  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:52.421172  293728 cri.go:89] found id: ""
	I1206 10:08:52.421197  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.421206  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:52.421212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:52.421276  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:52.449281  293728 cri.go:89] found id: ""
	I1206 10:08:52.449305  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.449313  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:52.449320  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:52.449378  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:52.474517  293728 cri.go:89] found id: ""
	I1206 10:08:52.474539  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.474547  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:52.474553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:52.474616  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:52.500435  293728 cri.go:89] found id: ""
	I1206 10:08:52.500458  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.500466  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:52.500473  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:52.500532  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:52.526935  293728 cri.go:89] found id: ""
	I1206 10:08:52.526957  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.526965  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:52.526972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:52.527031  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:52.553625  293728 cri.go:89] found id: ""
	I1206 10:08:52.553646  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.553654  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:52.553663  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:52.553721  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:52.580092  293728 cri.go:89] found id: ""
	I1206 10:08:52.580169  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.580194  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:52.580206  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:52.580269  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:52.609595  293728 cri.go:89] found id: ""
	I1206 10:08:52.609622  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.609631  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:52.609640  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:52.609658  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:52.666423  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:52.666460  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:52.680542  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:52.680572  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:52.745123  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:52.737007    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.737635    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739181    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739662    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.741168    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:52.737007    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.737635    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739181    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739662    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.741168    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:52.745142  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:52.745154  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:52.771578  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:52.771612  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:52.722479  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:54.722588  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:57.222560  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:58.722277  287962 node_ready.go:38] duration metric: took 6m0.000230261s for node "no-preload-257359" to be "Ready" ...
	I1206 10:08:58.725649  287962 out.go:203] 
	W1206 10:08:58.728547  287962 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:08:58.728572  287962 out.go:285] * 
	W1206 10:08:58.730704  287962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:08:58.733695  287962 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580837118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580855563Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580885274Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580900995Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580911087Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580921853Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580931149Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580945311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580962436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580998678Z" level=info msg="Connect containerd service"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.581274307Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.581881961Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598029541Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598099063Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598123851Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598177546Z" level=info msg="Start recovering state"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621091913Z" level=info msg="Start event monitor"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621277351Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621341351Z" level=info msg="Start streaming server"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621405639Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621464397Z" level=info msg="runtime interface starting up..."
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621515523Z" level=info msg="starting plugins..."
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621595705Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:02:56 no-preload-257359 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.623695007Z" level=info msg="containerd successfully booted in 0.067121s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:00.337002    3992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:00.339491    3992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:00.340327    3992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:00.341914    3992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:00.342465    3992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:09:00 up  1:51,  0 user,  load average: 0.31, 0.60, 1.42
	Linux no-preload-257359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:08:56 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:08:57 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 06 10:08:57 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:08:57 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:08:57 no-preload-257359 kubelet[3870]: E1206 10:08:57.526348    3870 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:08:57 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:08:57 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:08:58 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 06 10:08:58 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:08:58 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:08:58 no-preload-257359 kubelet[3875]: E1206 10:08:58.304198    3875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:08:58 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:08:58 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:08:58 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 06 10:08:58 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:08:58 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:08:59 no-preload-257359 kubelet[3880]: E1206 10:08:59.058067    3880 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:08:59 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:08:59 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:08:59 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 06 10:08:59 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:08:59 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:08:59 no-preload-257359 kubelet[3915]: E1206 10:08:59.816491    3915 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:08:59 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:08:59 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359: exit status 2 (355.035679ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-257359" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.66s)
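The kubelet crash-loop in the log above is explicit about its cause: the host is still on cgroup v1, which this kubelet build refuses to start on. A minimal Go sketch of one way to check which cgroup version a Linux host exposes (assuming the standard /sys/fs/cgroup mount; on a cgroup v2 unified hierarchy the root exposes cgroup.controllers, a file that is absent under v1):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On cgroup v2 (unified hierarchy), /sys/fs/cgroup is a cgroup2fs
		// mount and exposes cgroup.controllers at its root; under cgroup v1
		// that file does not exist, so a stat failure implies v1 (or hybrid).
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2")
		} else {
			fmt.Println("cgroup v1 (or hybrid)")
		}
	}

Equivalently, `stat -fc %T /sys/fs/cgroup` prints cgroup2fs on a v2 host.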

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (107.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-387337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1206 10:04:42.815556    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:05:10.518856    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:05:55.755008    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:06:00.404539    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:06:09.863084    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-387337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.405677905s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-387337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
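Every kubectl call in the failure above dies with "connection refused" on localhost:8443, so the addon manifests never even reach validation. A minimal sketch of the reachability check those errors imply (a hypothetical probe, not part of the test suite; host and port taken from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the apiserver endpoint from the log; a closed port yields
		// the same "connect: connection refused" seen in the stderr above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port open")
	}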
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-387337
helpers_test.go:243: (dbg) docker inspect newest-cni-387337:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	        "Created": "2025-12-06T09:56:17.358293629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:56:17.425249124Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hosts",
	        "LogPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9-json.log",
	        "Name": "/newest-cni-387337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-387337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-387337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	                "LowerDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-387337",
	                "Source": "/var/lib/docker/volumes/newest-cni-387337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-387337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-387337",
	                "name.minikube.sigs.k8s.io": "newest-cni-387337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19ea7d8f996048fa64d4d866afeea4320430f2f98edf98767d2a1c4c6ca3fe99",
	            "SandboxKey": "/var/run/docker/netns/19ea7d8f9960",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-387337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:5d:4c:44:a6:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f42a70d42248e7fb537c8957fc3c9ad0a04046b4da244cdde31b86ebc56a160b",
	                    "EndpointID": "c1491ff939cb05ddcbda7885723e4df86157bca2d9a03aa5f2a86896d137b8fa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-387337",
	                        "e89a14c7a996"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337: exit status 6 (341.486873ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 10:06:22.358389  293204 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-387337" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-387337 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p old-k8s-version-587884                                                                                                                                                                                                                                  │ old-k8s-version-587884       │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ delete  │ -p disable-driver-mounts-507319                                                                                                                                                                                                                            │ disable-driver-mounts-507319 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ embed-certs-100767 image list --format=json                                                                                                                                                                                                                │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-257359 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-257359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-387337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:02:50
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:02:50.560309  287962 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:02:50.560438  287962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:02:50.560447  287962 out.go:374] Setting ErrFile to fd 2...
	I1206 10:02:50.560453  287962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:02:50.560700  287962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:02:50.561041  287962 out.go:368] Setting JSON to false
	I1206 10:02:50.561931  287962 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6322,"bootTime":1765009049,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:02:50.561998  287962 start.go:143] virtualization:  
	I1206 10:02:50.565075  287962 out.go:179] * [no-preload-257359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:02:50.569157  287962 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:02:50.569230  287962 notify.go:221] Checking for updates...
	I1206 10:02:50.575040  287962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:02:50.578100  287962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:02:50.581099  287962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:02:50.584049  287962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:02:50.587045  287962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:02:50.590515  287962 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:02:50.591076  287962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:02:50.613858  287962 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:02:50.613996  287962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:02:50.681770  287962 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:02:50.672313547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:02:50.681877  287962 docker.go:319] overlay module found
	I1206 10:02:50.685299  287962 out.go:179] * Using the docker driver based on existing profile
	I1206 10:02:50.688097  287962 start.go:309] selected driver: docker
	I1206 10:02:50.688133  287962 start.go:927] validating driver "docker" against &{Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:02:50.688234  287962 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:02:50.688955  287962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:02:50.763306  287962 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:02:50.754198972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:02:50.763670  287962 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:02:50.763694  287962 cni.go:84] Creating CNI manager for ""
	I1206 10:02:50.763755  287962 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:02:50.763787  287962 start.go:353] cluster config:
	{Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:02:50.767045  287962 out.go:179] * Starting "no-preload-257359" primary control-plane node in "no-preload-257359" cluster
	I1206 10:02:50.769839  287962 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:02:50.772658  287962 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:02:50.775524  287962 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:02:50.775664  287962 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/config.json ...
	I1206 10:02:50.776024  287962 cache.go:107] acquiring lock: {Name:mkad35cce177b57f018574c39ee8c3c239eb9b07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776116  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 10:02:50.776125  287962 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.204µs
	I1206 10:02:50.776138  287962 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 10:02:50.776152  287962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:02:50.776297  287962 cache.go:107] acquiring lock: {Name:mk5bfca67d26458a19d81fb604def77746df1eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776349  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 10:02:50.776357  287962 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 64.616µs
	I1206 10:02:50.776363  287962 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776373  287962 cache.go:107] acquiring lock: {Name:mk51ddffc8cf367c8f9ab9dab46cca9425ce4f0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776404  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 10:02:50.776409  287962 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.794µs
	I1206 10:02:50.776415  287962 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776424  287962 cache.go:107] acquiring lock: {Name:mkdb80297b5c34ff2c59c7d0547bc50e4c902573 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776457  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 10:02:50.776467  287962 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.57µs
	I1206 10:02:50.776475  287962 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776497  287962 cache.go:107] acquiring lock: {Name:mk507200c1f46ea68c0c2896fa231924d660663f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776525  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 10:02:50.776530  287962 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.002µs
	I1206 10:02:50.776536  287962 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 10:02:50.776545  287962 cache.go:107] acquiring lock: {Name:mkf308199b47415a211213857d6d1bca152d3eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776571  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 10:02:50.776576  287962 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 31.213µs
	I1206 10:02:50.776581  287962 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 10:02:50.776589  287962 cache.go:107] acquiring lock: {Name:mk5d1295ea377d97f7962ba416aea9d5b2908db5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776615  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1206 10:02:50.776620  287962 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.77µs
	I1206 10:02:50.776625  287962 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 10:02:50.776635  287962 cache.go:107] acquiring lock: {Name:mk2939303cfab712d7c12da37ef89ab2271b37f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.776664  287962 cache.go:115] /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 10:02:50.776668  287962 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 34.815µs
	I1206 10:02:50.776674  287962 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 10:02:50.776680  287962 cache.go:87] Successfully saved all images to host disk.
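The eight cache hits above resolve to per-architecture image tarballs under the minikube home directory; because this is a --preload=false profile, these per-image files are the only local source for the control-plane images. An illustrative listing of the layout, paths taken from the log:

	ls /home/jenkins/minikube-integration/22049-2448/.minikube/cache/images/arm64/registry.k8s.io/
	# expected for this run: kube-apiserver_v1.35.0-beta.0, kube-controller-manager_v1.35.0-beta.0,
	# kube-proxy_v1.35.0-beta.0, kube-scheduler_v1.35.0-beta.0, etcd_3.6.5-0, pause_3.10.1,
	# plus coredns/coredns_v1.13.1 one directory down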
	I1206 10:02:50.798946  287962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:02:50.798971  287962 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:02:50.798991  287962 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:02:50.799021  287962 start.go:360] acquireMachinesLock for no-preload-257359: {Name:mk6d92dd7ed626ac67dff0eb9c6415617a7c299c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:02:50.799098  287962 start.go:364] duration metric: took 57.026µs to acquireMachinesLock for "no-preload-257359"
	I1206 10:02:50.799124  287962 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:02:50.799130  287962 fix.go:54] fixHost starting: 
	I1206 10:02:50.799434  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:50.817117  287962 fix.go:112] recreateIfNeeded on no-preload-257359: state=Stopped err=<nil>
	W1206 10:02:50.817159  287962 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:02:50.820604  287962 out.go:252] * Restarting existing docker container for "no-preload-257359" ...
	I1206 10:02:50.820691  287962 cli_runner.go:164] Run: docker start no-preload-257359
	I1206 10:02:51.082081  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:51.109151  287962 kic.go:430] container "no-preload-257359" state is running.
	I1206 10:02:51.111028  287962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 10:02:51.134579  287962 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/config.json ...
	I1206 10:02:51.135073  287962 machine.go:94] provisionDockerMachine start ...
	I1206 10:02:51.135154  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:51.160524  287962 main.go:143] libmachine: Using SSH client type: native
	I1206 10:02:51.161106  287962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 10:02:51.161128  287962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:02:51.161871  287962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:02:54.315394  287962 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-257359
	
	I1206 10:02:54.315419  287962 ubuntu.go:182] provisioning hostname "no-preload-257359"
	I1206 10:02:54.315482  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:54.335607  287962 main.go:143] libmachine: Using SSH client type: native
	I1206 10:02:54.335937  287962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 10:02:54.335955  287962 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-257359 && echo "no-preload-257359" | sudo tee /etc/hostname
	I1206 10:02:54.504049  287962 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-257359
	
	I1206 10:02:54.504125  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:54.526012  287962 main.go:143] libmachine: Using SSH client type: native
	I1206 10:02:54.526337  287962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1206 10:02:54.526359  287962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-257359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-257359/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-257359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:02:54.679778  287962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
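The empty SSH output above is the success case: the script only rewrites /etc/hosts when no line for the new hostname exists yet. A minimal way to confirm the mapping it enforces (illustrative check, not part of the run):

	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts
	# expected: 127.0.1.1 no-preload-257359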
	I1206 10:02:54.679871  287962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:02:54.679899  287962 ubuntu.go:190] setting up certificates
	I1206 10:02:54.679930  287962 provision.go:84] configureAuth start
	I1206 10:02:54.680010  287962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 10:02:54.697376  287962 provision.go:143] copyHostCerts
	I1206 10:02:54.697458  287962 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:02:54.697469  287962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:02:54.697553  287962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:02:54.697662  287962 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:02:54.697668  287962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:02:54.697694  287962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:02:54.697758  287962 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:02:54.697763  287962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:02:54.697787  287962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:02:54.697840  287962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.no-preload-257359 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-257359]
	I1206 10:02:54.977047  287962 provision.go:177] copyRemoteCerts
	I1206 10:02:54.977148  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:02:54.977221  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:54.995583  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.103869  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:02:55.123476  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:02:55.143183  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
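The three scp calls above complete the docker-machine TLS layout on the node; destination paths and sizes as logged:

	/etc/docker/ca.pem          # CA certificate (1078 bytes)
	/etc/docker/server.pem      # server certificate, SANs include 192.168.76.2 (1220 bytes)
	/etc/docker/server-key.pem  # server private key (1675 bytes)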
	I1206 10:02:55.162544  287962 provision.go:87] duration metric: took 482.585221ms to configureAuth
	I1206 10:02:55.162615  287962 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:02:55.162829  287962 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:02:55.162844  287962 machine.go:97] duration metric: took 4.027747325s to provisionDockerMachine
	I1206 10:02:55.162853  287962 start.go:293] postStartSetup for "no-preload-257359" (driver="docker")
	I1206 10:02:55.162865  287962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:02:55.162921  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:02:55.162965  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.180527  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.287583  287962 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:02:55.291124  287962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:02:55.291151  287962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:02:55.291168  287962 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:02:55.291224  287962 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:02:55.291309  287962 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:02:55.291497  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:02:55.299238  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:02:55.317641  287962 start.go:296] duration metric: took 154.772967ms for postStartSetup
	I1206 10:02:55.317745  287962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:02:55.317837  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.335751  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.440465  287962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:02:55.445127  287962 fix.go:56] duration metric: took 4.645989389s for fixHost
	I1206 10:02:55.445154  287962 start.go:83] releasing machines lock for "no-preload-257359", held for 4.646041311s
	I1206 10:02:55.445251  287962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-257359
	I1206 10:02:55.462635  287962 ssh_runner.go:195] Run: cat /version.json
	I1206 10:02:55.462693  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.462962  287962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:02:55.463017  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:55.487975  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.493550  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:55.591110  287962 ssh_runner.go:195] Run: systemctl --version
	I1206 10:02:55.687501  287962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:02:55.693096  287962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:02:55.693233  287962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:02:55.701547  287962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
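Nothing matched in this run, but the find invocation above is how minikube parks conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix. Its glob patterns are logged unquoted because ssh_runner hands the string to a remote shell; a copy-paste-safe equivalent would quote them:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;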
	I1206 10:02:55.701573  287962 start.go:496] detecting cgroup driver to use...
	I1206 10:02:55.701604  287962 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:02:55.701653  287962 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:02:55.719594  287962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:02:55.734226  287962 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:02:55.734290  287962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:02:55.750404  287962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:02:55.764033  287962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:02:55.874437  287962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:02:56.003896  287962 docker.go:234] disabling docker service ...
	I1206 10:02:56.004020  287962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:02:56.022407  287962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:02:56.039132  287962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:02:56.150673  287962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:02:56.279968  287962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:02:56.293559  287962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:02:56.309015  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:02:56.320264  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:02:56.329394  287962 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:02:56.329501  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:02:56.338337  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:02:56.348542  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:02:56.357278  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:02:56.366102  287962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:02:56.374530  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:02:56.383495  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:02:56.392560  287962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:02:56.401292  287962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:02:56.408750  287962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:02:56.416046  287962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:02:56.521476  287962 ssh_runner.go:195] Run: sudo systemctl restart containerd
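Two durable artifacts fall out of the runtime-configuration block above: a one-line /etc/crictl.yaml pointing crictl at the containerd socket, and a config.toml pinned to the cgroupfs driver (matching cgroupDriver: cgroupfs in the kubelet config further below). Distilled from the logged commands:

	# /etc/crictl.yaml, as written by the tee pipeline:
	#   runtime-endpoint: unix:///run/containerd/containerd.sock
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload
	sudo systemctl restart containerd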
	I1206 10:02:56.624710  287962 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:02:56.624790  287962 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:02:56.628711  287962 start.go:564] Will wait 60s for crictl version
	I1206 10:02:56.628775  287962 ssh_runner.go:195] Run: which crictl
	I1206 10:02:56.632374  287962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:02:56.660663  287962 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:02:56.660734  287962 ssh_runner.go:195] Run: containerd --version
	I1206 10:02:56.680803  287962 ssh_runner.go:195] Run: containerd --version
	I1206 10:02:56.706136  287962 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:02:56.708890  287962 cli_runner.go:164] Run: docker network inspect no-preload-257359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:02:56.729633  287962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 10:02:56.733998  287962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
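The bash one-liner above is a rewrite-then-copy pattern: drop any stale host.minikube.internal line, append the current gateway mapping, and install the result with a single sudo cp so /etc/hosts is never left half-written. Unfolded for readability (fields are tab-separated, as in the log):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.76.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts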
	I1206 10:02:56.743903  287962 kubeadm.go:884] updating cluster {Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:02:56.744025  287962 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:02:56.744079  287962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:02:56.773425  287962 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:02:56.773444  287962 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:02:56.773451  287962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 10:02:56.773547  287962 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-257359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
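The double ExecStart= in the drop-in above is the standard systemd override idiom: the empty first assignment clears the packaged command line so the drop-in's ExecStart replaces it instead of appending. Two routine systemctl commands to inspect and apply such a drop-in (illustrative, beyond what the log shows):

	systemctl cat kubelet          # base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload   # reload units after editing drop-ins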
	I1206 10:02:56.773604  287962 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:02:56.801911  287962 cni.go:84] Creating CNI manager for ""
	I1206 10:02:56.801937  287962 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:02:56.801959  287962 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:02:56.801983  287962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-257359 NodeName:no-preload-257359 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:02:56.802107  287962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-257359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
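The generated manifest above is written to /var/tmp/minikube/kubeadm.yaml.new by the scp a few lines below. On recent kubeadm releases a config in this three-document form can be sanity-checked before use; an illustrative command this run does not itself execute:

	/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new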
	
	I1206 10:02:56.802181  287962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:02:56.810040  287962 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:02:56.810160  287962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:02:56.817847  287962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:02:56.834027  287962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:02:56.847083  287962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 10:02:56.859664  287962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:02:56.863520  287962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:02:56.873266  287962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:02:56.982686  287962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:02:57.002169  287962 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359 for IP: 192.168.76.2
	I1206 10:02:57.002242  287962 certs.go:195] generating shared ca certs ...
	I1206 10:02:57.002272  287962 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.002542  287962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:02:57.002639  287962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:02:57.002674  287962 certs.go:257] generating profile certs ...
	I1206 10:02:57.002879  287962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/client.key
	I1206 10:02:57.003008  287962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key.673fc286
	I1206 10:02:57.003090  287962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key
	I1206 10:02:57.003263  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:02:57.003330  287962 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:02:57.003355  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:02:57.003487  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:02:57.003549  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:02:57.003611  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:02:57.003709  287962 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:02:57.004746  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:02:57.030862  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:02:57.051127  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:02:57.070625  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:02:57.091646  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:02:57.109996  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:02:57.128427  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:02:57.146680  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/no-preload-257359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:02:57.165617  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:02:57.183550  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:02:57.201664  287962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:02:57.220303  287962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:02:57.233337  287962 ssh_runner.go:195] Run: openssl version
	I1206 10:02:57.240029  287962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.247873  287962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:02:57.255843  287962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.259576  287962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.259660  287962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:02:57.301069  287962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:02:57.308859  287962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.316603  287962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:02:57.324324  287962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.328364  287962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.328429  287962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:02:57.371448  287962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:02:57.379279  287962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.386821  287962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:02:57.394739  287962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.398636  287962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.398746  287962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:02:57.439669  287962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
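The three rounds above repeat one idiom per CA: link the PEM into /usr/share/ca-certificates, take its OpenSSL subject hash, and confirm the hash-named symlink (b5213941.0, 51391683.0, 3ec20f2e.0) under /etc/ssl/certs, which is where OpenSSL resolves trust anchors. A condensed sketch of a single round:

```bash
# One round of the CA-trust idiom from the log, parameterized over the PEM:
pem=/usr/share/ca-certificates/minikubeCA.pem
hash=$(openssl x509 -hash -noout -in "$pem")    # prints e.g. b5213941
sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # hash-named symlink OpenSSL looks up
sudo test -L "/etc/ssl/certs/${hash}.0" && echo "trust anchor installed: $pem"
```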
	I1206 10:02:57.447527  287962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:02:57.451414  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:02:57.495635  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:02:57.538757  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:02:57.580199  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:02:57.621554  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:02:57.663093  287962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
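`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 h), so the six checks above are how minikube decides whether control-plane certs need regenerating. The same probe, looped over the set from the log:

```bash
# Expiry probe equivalent to the six -checkend runs above (run inside the node):
for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
           etcd/server etcd/healthcheck-client etcd/peer; do
  sudo openssl x509 -noout -checkend 86400 \
    -in "/var/lib/minikube/certs/${crt}.crt" \
    || echo "WARN: ${crt}.crt expires within 24h" >&2
done
```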
	I1206 10:02:57.704506  287962 kubeadm.go:401] StartCluster: {Name:no-preload-257359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-257359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
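The StartCluster line is a dump of the profile's full cluster config. The same settings persist as JSON on the host, so they can be read back without scraping the log; a sketch assuming the standard MINIKUBE_HOME profile layout and `jq` on the host:

```bash
# Read the persisted profile config instead of parsing the log dump:
jq '.KubernetesConfig | {ClusterName, KubernetesVersion, ContainerRuntime}' \
  "$HOME/.minikube/profiles/no-preload-257359/config.json"
```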
	I1206 10:02:57.704612  287962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:02:57.704683  287962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:02:57.737766  287962 cri.go:89] found id: ""
	I1206 10:02:57.737905  287962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:02:57.747113  287962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:02:57.747187  287962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:02:57.747271  287962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:02:57.755581  287962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:02:57.756044  287962 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-257359" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:02:57.756207  287962 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-257359" cluster setting kubeconfig missing "no-preload-257359" context setting]
	I1206 10:02:57.756524  287962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.758045  287962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:02:57.767197  287962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1206 10:02:57.767270  287962 kubeadm.go:602] duration metric: took 20.064098ms to restartPrimaryControlPlane
	I1206 10:02:57.767298  287962 kubeadm.go:403] duration metric: took 62.801543ms to StartCluster
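restartPrimaryControlPlane finished in 20 ms because the freshly rendered kubeadm config matched the one already on the node; the decision is literally the `diff -u` run above. Reproduced by hand:

```bash
# The reconfiguration check is a plain diff of rendered kubeadm configs:
minikube -p no-preload-257359 ssh -- \
  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
  && echo "no reconfiguration needed"
```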
	I1206 10:02:57.767343  287962 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:02:57.767500  287962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:02:57.768125  287962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
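The repair re-adds the cluster and context entries that the verify step found missing. Roughly the equivalent with stock kubectl, using the node address from this log (the docker driver may substitute a forwarded localhost port for the in-network address):

```bash
# Hand-rolled equivalent of the kubeconfig repair above:
kubectl config set-cluster no-preload-257359 \
  --server=https://192.168.76.2:8443 \
  --certificate-authority="$HOME/.minikube/ca.crt"
kubectl config set-context no-preload-257359 \
  --cluster=no-preload-257359 --user=no-preload-257359
```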
	I1206 10:02:57.768380  287962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:02:57.768778  287962 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:02:57.768818  287962 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
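The toEnable map is the merged addon state for this profile: dashboard, default-storageclass, and storage-provisioner on, everything else off. The same state is reachable from the CLI:

```bash
# Inspect/toggle the same addon set outside the test harness:
minikube -p no-preload-257359 addons list | grep -E 'dashboard|storage'
minikube -p no-preload-257359 addons enable dashboard
```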
	I1206 10:02:57.768907  287962 addons.go:70] Setting storage-provisioner=true in profile "no-preload-257359"
	I1206 10:02:57.768922  287962 addons.go:239] Setting addon storage-provisioner=true in "no-preload-257359"
	I1206 10:02:57.768948  287962 host.go:66] Checking if "no-preload-257359" exists ...
	I1206 10:02:57.769092  287962 addons.go:70] Setting dashboard=true in profile "no-preload-257359"
	I1206 10:02:57.769107  287962 addons.go:239] Setting addon dashboard=true in "no-preload-257359"
	W1206 10:02:57.769113  287962 addons.go:248] addon dashboard should already be in state true
	I1206 10:02:57.769132  287962 host.go:66] Checking if "no-preload-257359" exists ...
	I1206 10:02:57.769421  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.769598  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.771431  287962 addons.go:70] Setting default-storageclass=true in profile "no-preload-257359"
	I1206 10:02:57.771472  287962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-257359"
	I1206 10:02:57.771804  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.774271  287962 out.go:179] * Verifying Kubernetes components...
	I1206 10:02:57.777342  287962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:02:57.814184  287962 addons.go:239] Setting addon default-storageclass=true in "no-preload-257359"
	I1206 10:02:57.814227  287962 host.go:66] Checking if "no-preload-257359" exists ...
	I1206 10:02:57.814645  287962 cli_runner.go:164] Run: docker container inspect no-preload-257359 --format={{.State.Status}}
	I1206 10:02:57.822142  287962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 10:02:57.822210  287962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:02:57.824805  287962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:57.824833  287962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:02:57.824900  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:57.829489  287962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 10:02:57.833709  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 10:02:57.833737  287962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 10:02:57.833810  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:57.854012  287962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:02:57.854037  287962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:02:57.854112  287962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-257359
	I1206 10:02:57.856277  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:57.890754  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:57.895620  287962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/no-preload-257359/id_rsa Username:docker}
	I1206 10:02:58.001418  287962 ssh_runner.go:195] Run: sudo systemctl start kubelet
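Starting the kubelet is the last host-level step; from here on every failure in this run is an API-server problem. If a start stalls at this point, the kubelet journal inside the node is the first thing to check:

```bash
# Kubelet health inside the node (systemd is standard in the kicbase image):
minikube -p no-preload-257359 ssh -- \
  "sudo systemctl is-active kubelet && sudo journalctl -u kubelet -n 20 --no-pager"
```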
	I1206 10:02:58.013906  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:58.039554  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:02:58.055658  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 10:02:58.055695  287962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 10:02:58.101580  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 10:02:58.101618  287962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 10:02:58.128504  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 10:02:58.128540  287962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 10:02:58.143820  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 10:02:58.143842  287962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 10:02:58.157352  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 10:02:58.157374  287962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 10:02:58.170340  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 10:02:58.170363  287962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 10:02:58.183841  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 10:02:58.183863  287962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 10:02:58.196825  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 10:02:58.196897  287962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 10:02:58.210321  287962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:02:58.210397  287962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 10:02:58.225210  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:02:58.721996  287962 node_ready.go:35] waiting up to 6m0s for node "no-preload-257359" to be "Ready" ...
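This starts the 6-minute node-readiness gate that the rest of the run races against. The same gate expressed as a plain kubectl wait, assuming the context name matches the profile (minikube sets it that way):

```bash
# Equivalent readiness gate to node_ready.go's wait:
kubectl --context no-preload-257359 wait --for=condition=Ready \
  node/no-preload-257359 --timeout=6m
```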
	W1206 10:02:58.722385  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.722423  287962 retry.go:31] will retry after 208.185624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:58.722498  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.722524  287962 retry.go:31] will retry after 257.532203ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:58.722744  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.722763  287962 retry.go:31] will retry after 233.335704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:58.931351  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:58.956947  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:02:58.980534  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:02:59.025353  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.025390  287962 retry.go:31] will retry after 353.673401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:59.100456  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.100492  287962 retry.go:31] will retry after 331.036919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:59.107099  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.107140  287962 retry.go:31] will retry after 441.449257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.379273  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:02:59.432019  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:02:59.442471  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.442555  287962 retry.go:31] will retry after 796.609581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:02:59.506117  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.506155  287962 retry.go:31] will retry after 415.679971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.549272  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:02:59.613494  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.613567  287962 retry.go:31] will retry after 772.999564ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.922714  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:02:59.987770  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:02:59.987802  287962 retry.go:31] will retry after 559.230816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.240691  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:03:00.387605  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:00.455516  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.455602  287962 retry.go:31] will retry after 1.187622029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:00.463633  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.463667  287962 retry.go:31] will retry after 1.200867497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.547852  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:00.612093  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:00.612139  287962 retry.go:31] will retry after 893.435078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:00.722580  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
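Every apply in this stretch fails the same way: kubectl cannot reach the apiserver on localhost:8443, so even schema validation (which downloads /openapi/v2 from the apiserver) is refused before any manifest is submitted. The retry.go:31 entries show minikube's addon applier backing off between attempts. As a rough illustration only (the function and constants below are hypothetical, not minikube's actual code), the pattern looks like this in Go:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // retryApply re-runs `kubectl apply --force -f manifest` until it succeeds
    // or maxAttempts is exhausted, sleeping a randomized backoff between tries
    // (mirroring log entries such as "will retry after 1.187622029s").
    // retryApply and maxAttempts are illustrative names, not minikube's API.
    func retryApply(manifest string, maxAttempts int) error {
        var lastErr error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("attempt %d: %v\n%s", attempt, err, out)
            time.Sleep(time.Duration(500+rand.Intn(1500)) * time.Millisecond)
        }
        return lastErr
    }

    func main() {
        if err := retryApply("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }

Because the root cause is the refused connection rather than a bad manifest, every retry in the log below fails identically until the apiserver comes back.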
	I1206 10:03:01.505896  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:01.574850  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.574887  287962 retry.go:31] will retry after 1.48070732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.644272  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:03:01.664837  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:01.713457  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.713495  287962 retry.go:31] will retry after 1.793608766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:01.741247  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:01.741282  287962 retry.go:31] will retry after 1.808351217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:02.723834  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:03.056692  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:03.120499  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.120617  287962 retry.go:31] will retry after 3.123226715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.507497  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:03:03.550077  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:03.603673  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.603716  287962 retry.go:31] will retry after 1.607269464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:03.627477  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:03.627509  287962 retry.go:31] will retry after 1.427548448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.055613  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:05.122568  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.122601  287962 retry.go:31] will retry after 4.264191427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.212016  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:05.222808  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:05.272035  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:05.272069  287962 retry.go:31] will retry after 4.227301864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:06.244562  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:06.309955  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:06.310033  287962 retry.go:31] will retry after 4.216626241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:07.223150  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
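Interleaved with the addon retries, minikube polls the node's "Ready" condition directly against https://192.168.76.2:8443 and hits the same refused connection. A minimal client-go sketch of that kind of check, assuming the kubeconfig path the logs show; the wiring here is illustrative, not minikube's node_ready.go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and reports whether its "Ready" condition is
    // True. When the apiserver is down, the Get itself returns the
    // "connect: connection refused" error seen throughout this log.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(cs, "no-preload-257359")
        fmt.Println(ready, err)
    }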
	I1206 10:03:09.387517  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:09.457868  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:09.457900  287962 retry.go:31] will retry after 2.71431214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:09.499976  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:09.592059  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:09.592097  287962 retry.go:31] will retry after 2.312821913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:09.722871  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:10.527449  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:10.596453  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:10.596493  287962 retry.go:31] will retry after 5.508635395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:11.905982  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:11.973035  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:11.973068  287962 retry.go:31] will retry after 5.314130156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:12.173390  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:12.223182  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:12.232700  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:12.232730  287962 retry.go:31] will retry after 4.087053557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:14.722724  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:16.105932  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:16.170813  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:16.170847  287962 retry.go:31] will retry after 7.046098386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:16.320412  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:16.383512  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:16.383547  287962 retry.go:31] will retry after 7.362220175s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:16.723439  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:17.287932  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:17.349195  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:17.349229  287962 retry.go:31] will retry after 7.285529113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:19.223445  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:21.722607  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:23.217212  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:23.292880  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:23.292916  287962 retry.go:31] will retry after 20.839138696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:23.746772  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:23.837743  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:23.837784  287962 retry.go:31] will retry after 13.347463373s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:24.222666  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:24.635188  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:24.696400  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:24.696432  287962 retry.go:31] will retry after 15.254736641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:26.722523  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:28.722631  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:30.722708  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:33.222657  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:35.222704  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:37.186329  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:37.223320  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:37.292820  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:37.292848  287962 retry.go:31] will retry after 20.057827776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:39.722636  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:39.952067  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:03:40.017939  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:40.018752  287962 retry.go:31] will retry after 24.548199368s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:42.222608  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:44.132237  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:03:44.192642  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:44.192676  287962 retry.go:31] will retry after 19.029425314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:44.223357  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:46.722764  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:49.222520  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:51.222722  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:53.722670  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:03:56.222685  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:03:57.351089  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:03:57.410313  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:03:57.410344  287962 retry.go:31] will retry after 37.517817356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:03:58.223443  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:00.723057  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:03.222473  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:04:03.222747  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:03.285193  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:04:03.285231  287962 retry.go:31] will retry after 27.356198279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:04:04.567241  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:04:04.627406  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:04:04.627436  287962 retry.go:31] will retry after 26.195836442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:04:05.722912  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:08.222592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:10.223509  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:12.722603  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:15.222600  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:17.222760  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:19.723361  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:22.222650  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:24.222709  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:26.223343  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:28.722606  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:33.492075  278643 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000455959s
	I1206 10:04:33.497324  278643 kubeadm.go:319] 
	I1206 10:04:33.497409  278643 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:04:33.497452  278643 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:04:33.497564  278643 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:04:33.497573  278643 kubeadm.go:319] 
	I1206 10:04:33.497680  278643 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:04:33.497715  278643 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:04:33.497750  278643 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:04:33.497758  278643 kubeadm.go:319] 
	I1206 10:04:33.509281  278643 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:04:33.509716  278643 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:04:33.509836  278643 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:04:33.510075  278643 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:04:33.510082  278643 kubeadm.go:319] 
	I1206 10:04:33.510156  278643 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
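The kubeadm output above is its wait-control-plane phase giving up: kubeadm polls the kubelet's healthz endpoint (the check it describes as 'curl -sSL http://127.0.0.1:10248/healthz') until a deadline, here 4m0s, and aborts when the kubelet never answers. A minimal Go sketch of that style of probe, assuming the default kubelet healthz port; illustrative, not kubeadm's code:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint -- the same check
// kubeadm describes in the log above -- until it answers 200 OK or the
// context deadline expires.
func waitForKubelet(ctx context.Context, url string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is healthy
			}
		}
		select {
		case <-ctx.Done():
			// Mirrors the log: "context deadline exceeded" after 4m0s.
			return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForKubelet(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
		fmt.Println(err)
	}
}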
	I1206 10:04:33.510222  278643 kubeadm.go:403] duration metric: took 8m7.660801722s to StartCluster
	I1206 10:04:33.510279  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:04:33.510354  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:04:33.557741  278643 cri.go:89] found id: ""
	I1206 10:04:33.557779  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.557788  278643 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:04:33.557796  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:04:33.557870  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:04:33.590687  278643 cri.go:89] found id: ""
	I1206 10:04:33.590716  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.590766  278643 logs.go:284] No container was found matching "etcd"
	I1206 10:04:33.590773  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:04:33.590860  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:04:33.619661  278643 cri.go:89] found id: ""
	I1206 10:04:33.619702  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.619713  278643 logs.go:284] No container was found matching "coredns"
	I1206 10:04:33.619720  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:04:33.619795  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:04:33.645015  278643 cri.go:89] found id: ""
	I1206 10:04:33.645040  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.645050  278643 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:04:33.645056  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:04:33.645136  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:04:33.670104  278643 cri.go:89] found id: ""
	I1206 10:04:33.670173  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.670200  278643 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:04:33.670221  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:04:33.670299  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:04:33.695765  278643 cri.go:89] found id: ""
	I1206 10:04:33.695789  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.695798  278643 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:04:33.695805  278643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:04:33.695865  278643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:04:33.722778  278643 cri.go:89] found id: ""
	I1206 10:04:33.722855  278643 logs.go:282] 0 containers: []
	W1206 10:04:33.722877  278643 logs.go:284] No container was found matching "kindnet"
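After the kubeadm failure, minikube scans for control-plane containers one component at a time, running crictl ps -a --quiet --name=<component> over SSH; the found id: "" / 0 containers: [] pairs above show every scan coming back empty, confirming the control plane never started. A rough Go equivalent of that scan, assuming crictl is on PATH and the CRI socket is accessible:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names minikube scans for in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// listContainers runs the same crictl query as the log's ssh_runner
// lines. On the failed node every component returns zero IDs, which is
// what the "0 containers: []" lines record.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}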
	I1206 10:04:33.722899  278643 logs.go:123] Gathering logs for kubelet ...
	I1206 10:04:33.722939  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:04:33.781701  278643 logs.go:123] Gathering logs for dmesg ...
	I1206 10:04:33.781737  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:04:33.795784  278643 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:04:33.795812  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:04:33.861564  278643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:04:33.852548    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.853295    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.854985    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.855733    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.857242    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:04:33.852548    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.853295    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.854985    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.855733    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:04:33.857242    4860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:04:33.861601  278643 logs.go:123] Gathering logs for containerd ...
	I1206 10:04:33.861614  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:04:33.901364  278643 logs.go:123] Gathering logs for container status ...
	I1206 10:04:33.901402  278643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:04:33.932222  278643 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000455959s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:04:33.932285  278643 out.go:285] * 
	W1206 10:04:33.932340  278643 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000455959s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:04:33.932357  278643 out.go:285] * 
	W1206 10:04:33.934488  278643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:04:33.941229  278643 out.go:203] 
	W1206 10:04:33.944295  278643 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000455959s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:04:33.944336  278643 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:04:33.944358  278643 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:04:33.949524  278643 out.go:203] 
	I1206 10:04:30.642530  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:04:30.708862  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:04:30.708972  287962 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 10:04:30.722850  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:30.824129  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:04:30.888593  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:04:30.888692  287962 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 10:04:32.735617  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:04:34.928756  287962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:04:35.053445  287962 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:04:35.053544  287962 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:04:35.056779  287962 out.go:179] * Enabled addons: 
	I1206 10:04:35.059761  287962 addons.go:530] duration metric: took 1m37.290947826s for enable addons: enabled=[]
	W1206 10:04:35.222513  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:37.222570  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:39.722630  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:42.222618  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:44.723604  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:47.222612  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:49.222935  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:51.722968  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:54.222557  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:56.222825  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:04:58.722671  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:00.723049  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:02.723435  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:05.222812  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:07.722576  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:09.722659  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:11.722705  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:13.723654  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:16.222504  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:18.222715  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:20.722949  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:22.723547  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:25.222533  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:27.222637  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:29.222722  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:31.722629  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:33.722688  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:35.722927  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:37.723623  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:40.223596  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:42.722591  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:45.223512  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:47.722519  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:49.722653  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:52.222599  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:54.722648  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:56.722892  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:05:58.723336  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:01.222648  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:03.722717  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:05.722812  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:08.222662  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:10.722866  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:13.222574  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:15.223640  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:17.722517  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:19.722607  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969320814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969390476Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969486092Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969600119Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969665285Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969726053Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969787649Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969847736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.969914559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.970006646Z" level=info msg="Connect containerd service"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.970348147Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.971072756Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.988408505Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.988883086Z" level=info msg="Start subscribing containerd event"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.989134961Z" level=info msg="Start recovering state"
	Dec 06 09:56:23 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:23.989034045Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030882208Z" level=info msg="Start event monitor"
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030937995Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030948448Z" level=info msg="Start streaming server"
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030960846Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030969552Z" level=info msg="runtime interface starting up..."
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030982828Z" level=info msg="starting plugins..."
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.030997007Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 09:56:24 newest-cni-387337 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 09:56:24 newest-cni-387337 containerd[759]: time="2025-12-06T09:56:24.033034408Z" level=info msg="containerd successfully booted in 0.087811s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:06:23.081348    6062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:06:23.081740    6062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:06:23.083264    6062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:06:23.084092    6062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:06:23.085752    6062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:06:23 up  1:48,  0 user,  load average: 0.45, 0.80, 1.63
	Linux newest-cni-387337 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:06:20 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:06:20 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 463.
	Dec 06 10:06:20 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:06:20 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:06:20 newest-cni-387337 kubelet[5939]: E1206 10:06:20.778592    5939 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:06:20 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:06:20 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:06:21 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 464.
	Dec 06 10:06:21 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:06:21 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:06:21 newest-cni-387337 kubelet[5945]: E1206 10:06:21.527674    5945 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:06:21 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:06:21 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:06:22 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 465.
	Dec 06 10:06:22 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:06:22 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:06:22 newest-cni-387337 kubelet[5963]: E1206 10:06:22.300623    5963 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:06:22 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:06:22 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:06:22 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 466.
	Dec 06 10:06:22 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:06:22 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:06:23 newest-cni-387337 kubelet[6055]: E1206 10:06:23.068200    6055 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:06:23 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:06:23 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337: exit status 6 (336.383047ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:06:23.580901  293435 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-387337" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-387337" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (107.99s)
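Root-cause note: the kubelet journal above shows why every apiserver call on :8443 is refused. On this cgroup v1 host (Ubuntu 20.04, kernel 5.15.0-1084-aws), kubelet v1.35.0-beta.0 exits at startup with "kubelet is configured to not run on a host using cgroup v1", systemd restarts it in a loop (restart counter 463 to 466), kubeadm's wait-control-plane phase times out after 4m0s, and no control-plane container is ever created. The suggestion minikube prints (--extra-config=kubelet.cgroup-driver=systemd) targets a cgroup-driver mismatch, which is a different condition from this cgroup v1 validation failure. Below is a minimal sketch, not a verified fix, of how one might confirm the hierarchy and opt back in to cgroup v1 per the preflight warning (KEP-5573); the profile name is taken from this log, and the YAML key failCgroupV1 is assumed to be the lower-camel form of the 'FailCgroupV1' option named in the warning.

	# Which cgroup hierarchy is the host running?
	# "cgroup2fs" => cgroup v2; "tmpfs" => legacy cgroup v1 (this run).
	stat -fc %T /sys/fs/cgroup

	# Hedged workaround inside the node: explicitly allow cgroup v1 for
	# kubelet >= v1.35, then restart it. Assumes failCgroupV1 is not
	# already set in /var/lib/kubelet/config.yaml (the path written by
	# kubeadm in the output above).
	minikube ssh -p newest-cni-387337 -- \
	  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml \
	   && sudo systemctl restart kubelet"

The separate status error ("newest-cni-387337" does not appear in the kubeconfig) is the stale-context condition the status output itself flags; minikube update-context -p newest-cni-387337 re-points kubectl once the profile is reachable again.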

TestStartStop/group/newest-cni/serial/SecondStart (373.54s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 10:06:36.061710    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:57.331162    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m8.515260536s)

-- stdout --
	* [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 

-- /stdout --
** stderr ** 
	I1206 10:06:25.195145  293728 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:06:25.195325  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195335  293728 out.go:374] Setting ErrFile to fd 2...
	I1206 10:06:25.195341  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195634  293728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:06:25.196028  293728 out.go:368] Setting JSON to false
	I1206 10:06:25.196926  293728 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6537,"bootTime":1765009049,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:06:25.196997  293728 start.go:143] virtualization:  
	I1206 10:06:25.199959  293728 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:06:25.203880  293728 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:06:25.204017  293728 notify.go:221] Checking for updates...
	I1206 10:06:25.210368  293728 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:06:25.213374  293728 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:25.216371  293728 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:06:25.221036  293728 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:06:25.223973  293728 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:06:25.227572  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:25.228243  293728 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:06:25.261513  293728 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:06:25.261626  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.340601  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.331029372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.340708  293728 docker.go:319] overlay module found
	I1206 10:06:25.343872  293728 out.go:179] * Using the docker driver based on existing profile
	I1206 10:06:25.346835  293728 start.go:309] selected driver: docker
	I1206 10:06:25.346867  293728 start.go:927] validating driver "docker" against &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.346969  293728 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:06:25.347911  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.407260  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.398348793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.407652  293728 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 10:06:25.407684  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:25.407750  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:25.407788  293728 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.410983  293728 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 10:06:25.413800  293728 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:06:25.416704  293728 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:06:25.419472  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:25.419517  293728 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:06:25.419530  293728 cache.go:65] Caching tarball of preloaded images
	I1206 10:06:25.419542  293728 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:06:25.419614  293728 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:06:25.419624  293728 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:06:25.419745  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.439065  293728 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:06:25.439097  293728 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:06:25.439117  293728 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:06:25.439151  293728 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:06:25.439218  293728 start.go:364] duration metric: took 44.948µs to acquireMachinesLock for "newest-cni-387337"
	I1206 10:06:25.439242  293728 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:06:25.439250  293728 fix.go:54] fixHost starting: 
	I1206 10:06:25.439553  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.455936  293728 fix.go:112] recreateIfNeeded on newest-cni-387337: state=Stopped err=<nil>
	W1206 10:06:25.455970  293728 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:06:25.459174  293728 out.go:252] * Restarting existing docker container for "newest-cni-387337" ...
	I1206 10:06:25.459260  293728 cli_runner.go:164] Run: docker start newest-cni-387337
	I1206 10:06:25.713574  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.738668  293728 kic.go:430] container "newest-cni-387337" state is running.
	I1206 10:06:25.739140  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:25.765706  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.766035  293728 machine.go:94] provisionDockerMachine start ...
	I1206 10:06:25.766147  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:25.787280  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:25.787973  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:25.787996  293728 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:06:25.789031  293728 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:06:28.943483  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:28.943510  293728 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 10:06:28.943583  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:28.962379  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:28.962708  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:28.962726  293728 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 10:06:29.136463  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:29.136552  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.155008  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:29.155343  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:29.155363  293728 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:06:29.311555  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:06:29.311646  293728 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:06:29.311703  293728 ubuntu.go:190] setting up certificates
	I1206 10:06:29.311733  293728 provision.go:84] configureAuth start
	I1206 10:06:29.311826  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.328361  293728 provision.go:143] copyHostCerts
	I1206 10:06:29.328435  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:06:29.328455  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:06:29.328532  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:06:29.328644  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:06:29.328655  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:06:29.328683  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:06:29.328754  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:06:29.328763  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:06:29.328788  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:06:29.328850  293728 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
	I1206 10:06:29.477422  293728 provision.go:177] copyRemoteCerts
	I1206 10:06:29.477497  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:06:29.477551  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.495349  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.603554  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:06:29.622338  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:06:29.641011  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:06:29.660417  293728 provision.go:87] duration metric: took 348.656521ms to configureAuth
	I1206 10:06:29.660488  293728 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:06:29.660700  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:29.660714  293728 machine.go:97] duration metric: took 3.894659315s to provisionDockerMachine
	I1206 10:06:29.660722  293728 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 10:06:29.660734  293728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:06:29.660787  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:06:29.660840  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.679336  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.792654  293728 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:06:29.796414  293728 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:06:29.796451  293728 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:06:29.796481  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:06:29.796555  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:06:29.796637  293728 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:06:29.796752  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:06:29.804466  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:29.822913  293728 start.go:296] duration metric: took 162.176035ms for postStartSetup
	I1206 10:06:29.822993  293728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:06:29.823033  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.841962  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.944706  293728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:06:29.949621  293728 fix.go:56] duration metric: took 4.510364001s for fixHost
	I1206 10:06:29.949690  293728 start.go:83] releasing machines lock for "newest-cni-387337", held for 4.510458303s
	I1206 10:06:29.949801  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.966982  293728 ssh_runner.go:195] Run: cat /version.json
	I1206 10:06:29.967044  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.967315  293728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:06:29.967425  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.989346  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.995399  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:30.108934  293728 ssh_runner.go:195] Run: systemctl --version
	I1206 10:06:30.251570  293728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:06:30.256600  293728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:06:30.256686  293728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:06:30.265366  293728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
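Note: the find/mv step above is how minikube side-lines pre-existing bridge/podman CNI configs by renaming them to *.mk_disabled; on this run it found none. An illustrative leftover check, not part of the test run:

	# list CNI configs that are still active (not yet renamed)
	ls /etc/cni/net.d | grep -v '\.mk_disabled$'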
	I1206 10:06:30.265436  293728 start.go:496] detecting cgroup driver to use...
	I1206 10:06:30.265475  293728 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:06:30.265547  293728 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:06:30.285393  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:06:30.300014  293728 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:06:30.300101  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:06:30.316388  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:06:30.330703  293728 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:06:30.447811  293728 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:06:30.578928  293728 docker.go:234] disabling docker service ...
	I1206 10:06:30.579012  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:06:30.595245  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:06:30.608936  293728 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:06:30.732584  293728 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:06:30.854426  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:06:30.867755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:06:30.882294  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:06:30.891997  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:06:30.901695  293728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:06:30.901766  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:06:30.911307  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.920864  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:06:30.930280  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.939955  293728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:06:30.948517  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:06:30.957894  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:06:30.967715  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:06:30.977793  293728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:06:30.985557  293728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:06:30.993239  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.114748  293728 ssh_runner.go:195] Run: sudo systemctl restart containerd
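Note: the sed edits above rewrite /etc/containerd/config.toml in place (sandbox image, cgroup driver, CNI conf dir, unprivileged ports) before this restart. An illustrative spot-check that the cgroup driver landed as intended:

	# should show SystemdCgroup = false, matching the "cgroupfs" driver detected on the host
	grep -n 'SystemdCgroup' /etc/containerd/config.toml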
	I1206 10:06:31.239476  293728 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:06:31.239597  293728 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:06:31.244664  293728 start.go:564] Will wait 60s for crictl version
	I1206 10:06:31.244770  293728 ssh_runner.go:195] Run: which crictl
	I1206 10:06:31.249231  293728 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:06:31.276528  293728 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:06:31.276637  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.298790  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.323558  293728 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:06:31.326534  293728 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:06:31.343556  293728 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 10:06:31.347752  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
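Note: the bash one-liner above rebuilds /etc/hosts in a temp file and copies it back in one cp, so the host.minikube.internal entry is replaced atomically rather than edited in place. Illustrative verification on the node:

	grep 'host.minikube.internal' /etc/hosts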
	I1206 10:06:31.361512  293728 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 10:06:31.364437  293728 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:06:31.364599  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:31.364692  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.390507  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.390542  293728 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:06:31.390602  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.417903  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.417928  293728 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:06:31.417937  293728 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 10:06:31.418044  293728 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
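Note: the [Service] stanza above goes into a systemd drop-in; the empty ExecStart= line clears the packaged command before the minikube-specific one is set, which is standard systemd override practice. An illustrative way to view the merged unit on the node:

	# shows the base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	systemctl cat kubelet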
	I1206 10:06:31.418117  293728 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:06:31.443849  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:31.443876  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:31.443900  293728 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 10:06:31.443924  293728 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:06:31.444044  293728 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
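Note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new. If a generated config like this needs checking by hand, recent kubeadm releases can validate it; illustrative, not run by the test:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new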
	
	I1206 10:06:31.444118  293728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:06:31.452187  293728 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:06:31.452301  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:06:31.460150  293728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:06:31.473854  293728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:06:31.487946  293728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 10:06:31.501615  293728 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:06:31.505530  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:06:31.516062  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.633832  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
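Note: given the cgroup v1 refusal seen earlier in this report, this start is the point where kubelet can enter a restart loop. Illustrative commands to confirm whether it actually stayed up:

	systemctl is-active kubelet
	journalctl -u kubelet -n 20 --no-pager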
	I1206 10:06:31.655929  293728 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 10:06:31.655955  293728 certs.go:195] generating shared ca certs ...
	I1206 10:06:31.655972  293728 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:31.656127  293728 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:06:31.656182  293728 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:06:31.656198  293728 certs.go:257] generating profile certs ...
	I1206 10:06:31.656306  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 10:06:31.656372  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 10:06:31.656419  293728 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 10:06:31.656536  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:06:31.656576  293728 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:06:31.656590  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:06:31.656620  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:06:31.656647  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:06:31.656675  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:06:31.656737  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:31.657407  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:06:31.678086  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:06:31.699851  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:06:31.722100  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:06:31.743193  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:06:31.762896  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:06:31.781616  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:06:31.801280  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:06:31.819401  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:06:31.838552  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:06:31.856936  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:06:31.875547  293728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:06:31.888930  293728 ssh_runner.go:195] Run: openssl version
	I1206 10:06:31.895342  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.903529  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:06:31.911304  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915287  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915352  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.961696  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:06:31.970315  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.981710  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:06:31.992227  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996668  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996744  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:06:32.043296  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:06:32.051139  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.058979  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:06:32.066993  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071120  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071217  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.113955  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
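Note: each `openssl x509 -hash -noout` run above prints the subject-name hash OpenSSL uses for CA lookups, and the following `sudo test -L /etc/ssl/certs/<hash>.0` confirms the matching symlink exists; the b5213941.0 check here corresponds to the minikubeCA cert hashed two lines earlier. Illustrative reproduction against the same file:

	# prints the hash used for the /etc/ssl/certs/<hash>.0 symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem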
	I1206 10:06:32.121998  293728 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:06:32.126168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:06:32.167933  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:06:32.209594  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:06:32.252826  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:06:32.295168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:06:32.336384  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
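Note: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds, so the block above is a 24-hour validity sweep over the control-plane certs before they are reused. Illustrative standalone use:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid for at least 24h"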
	I1206 10:06:32.377923  293728 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:32.378019  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:06:32.378107  293728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:06:32.406152  293728 cri.go:89] found id: ""
	I1206 10:06:32.406224  293728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:06:32.414373  293728 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:06:32.414394  293728 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:06:32.414444  293728 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:06:32.422214  293728 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:06:32.422855  293728 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-387337" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.423179  293728 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-387337" cluster setting kubeconfig missing "newest-cni-387337" context setting]
	I1206 10:06:32.423737  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.425135  293728 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:06:32.433653  293728 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 10:06:32.433689  293728 kubeadm.go:602] duration metric: took 19.289872ms to restartPrimaryControlPlane
	I1206 10:06:32.433699  293728 kubeadm.go:403] duration metric: took 55.791147ms to StartCluster
	I1206 10:06:32.433714  293728 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.433786  293728 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.434769  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.434995  293728 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:06:32.435318  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:32.435370  293728 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:06:32.435471  293728 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-387337"
	I1206 10:06:32.435485  293728 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-387337"
	I1206 10:06:32.435510  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435575  293728 addons.go:70] Setting dashboard=true in profile "newest-cni-387337"
	I1206 10:06:32.435608  293728 addons.go:239] Setting addon dashboard=true in "newest-cni-387337"
	W1206 10:06:32.435630  293728 addons.go:248] addon dashboard should already be in state true
	I1206 10:06:32.435689  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435986  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436310  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436715  293728 addons.go:70] Setting default-storageclass=true in profile "newest-cni-387337"
	I1206 10:06:32.436742  293728 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-387337"
	I1206 10:06:32.437054  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.440794  293728 out.go:179] * Verifying Kubernetes components...
	I1206 10:06:32.443631  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:32.498221  293728 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 10:06:32.501060  293728 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 10:06:32.503631  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 10:06:32.503654  293728 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 10:06:32.503744  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.508648  293728 addons.go:239] Setting addon default-storageclass=true in "newest-cni-387337"
	I1206 10:06:32.508690  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.509493  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.523049  293728 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:06:32.526921  293728 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.526947  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:06:32.527022  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.570818  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.571691  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.595638  293728 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.595658  293728 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:06:32.595716  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.624247  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
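Each docker container inspect -f call above extracts the host port Docker published for the node container's 22/tcp, which is then handed to the new ssh client (127.0.0.1:33103 here). A sketch of that lookup, assuming only that docker is on PATH; the helper name is hypothetical:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // sshPort (hypothetical helper) runs the same Go template the log shows
    // to find the host port mapped to the container's SSH port. The template
    // in the log is wrapped in single quotes, hence the extra Trim.
    func sshPort(container string) (string, error) {
    	tmpl := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", tmpl, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
    }

    func main() {
    	port, err := sshPort("newest-cni-387337")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("ssh 127.0.0.1:" + port)
    }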
	I1206 10:06:32.694342  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:06:32.746370  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 10:06:32.746390  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 10:06:32.765644  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.786998  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 10:06:32.787020  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 10:06:32.804870  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.820938  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 10:06:32.821012  293728 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 10:06:32.877095  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 10:06:32.877165  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 10:06:32.903565  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 10:06:32.903593  293728 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 10:06:32.916625  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 10:06:32.916699  293728 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 10:06:32.930049  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 10:06:32.930072  293728 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 10:06:32.943222  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 10:06:32.943248  293728 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 10:06:32.958124  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:32.958148  293728 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 10:06:32.971454  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:33.482958  293728 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:06:33.483036  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
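The pgrep -xnf probe here, repeated roughly every 500ms through the rest of this section, is how api_server.go waits for a kube-apiserver process to appear before the node wait can proceed. A minimal polling sketch of the same idea; the timeout value is an assumption, not minikube's:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess (hypothetical) polls pgrep until a
    // kube-apiserver process shows up or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }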
	W1206 10:06:33.483155  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483183  293728 retry.go:31] will retry after 318.519734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:33.483231  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483244  293728 retry.go:31] will retry after 239.813026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:33.483501  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483518  293728 retry.go:31] will retry after 128.431008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.612510  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:33.679631  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.679670  293728 retry.go:31] will retry after 494.781452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.723639  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:33.790368  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.790401  293728 retry.go:31] will retry after 373.145908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.802573  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:33.864526  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.864571  293728 retry.go:31] will retry after 555.783365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.983818  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:34.164188  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:34.174768  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:34.315072  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.315120  293728 retry.go:31] will retry after 679.653646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:34.319455  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.319548  293728 retry.go:31] will retry after 695.531102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.421513  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:34.483690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:34.487662  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.487697  293728 retry.go:31] will retry after 692.225187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.983561  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:34.995819  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:35.016010  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:35.122122  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.122225  293728 retry.go:31] will retry after 1.142566381s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:35.138887  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.138925  293728 retry.go:31] will retry after 649.678663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.180839  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:35.247363  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.247415  293728 retry.go:31] will retry after 580.881907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.483771  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:35.788736  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:35.829213  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:35.856520  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.856598  293728 retry.go:31] will retry after 1.553154314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1206 10:06:35.896812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.896844  293728 retry.go:31] will retry after 933.683215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
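	The retry.go:31 lines above show minikube backing off and reapplying each addon manifest while the apiserver is unreachable. As an illustration only, a minimal Go sketch of that retry-with-backoff pattern follows; the kubectl and manifest paths are copied verbatim from the log, the 5-attempt cap and plain doubling delay are simplifications (the logged delays are jittered), and this is not minikube's actual retry.go implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apply shells out the same way the ssh_runner lines above show; the
	// kubectl and manifest paths are copied from the log and are specific
	// to this test environment.
	func apply(manifest string) error {
		out, err := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%w: %s", err, out)
		}
		return nil
	}

	func main() {
		delay := time.Second
		for attempt := 1; attempt <= 5; attempt++ {
			err := apply("/etc/kubernetes/addons/storage-provisioner.yaml")
			if err == nil {
				fmt.Println("applied")
				return
			}
			fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, delay, err)
			time.Sleep(delay)
			delay *= 2 // simple doubling; the logged delays are jittered, not exact doublings
		}
		fmt.Println("exhausted retries")
	}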
	I1206 10:06:35.984035  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.265085  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:36.326884  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.326918  293728 retry.go:31] will retry after 708.086155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 10:06:36.484141  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.831542  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:36.897118  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.897156  293728 retry.go:31] will retry after 1.33074055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:36.983504  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.035538  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:37.096009  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.096042  293728 retry.go:31] will retry after 1.790090237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
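	Every failure above has the same root cause: kubectl's client-side validation first downloads the cluster's OpenAPI schema from https://localhost:8443/openapi/v2, and that dial fails because nothing is listening on port 8443, i.e. kube-apiserver is not running. A quick way to confirm this independently of the manifests is to probe the endpoint directly; the sketch below is illustrative, and the insecure TLS config is an assumption that is acceptable only because it tests reachability, not server identity.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the exact URL kubectl reports in the errors above.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
		if err != nil {
			// With the apiserver down, this prints the same
			// "dial tcp [::1]:8443: connect: connection refused" seen above.
			fmt.Println("openapi unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("openapi status:", resp.Status) // even 401/403 proves the port is up
	}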
	I1206 10:06:37.410554  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:37.480541  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.480578  293728 retry.go:31] will retry after 966.279559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:06:37.483641  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.984118  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:38.228242  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:38.293907  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.293942  293728 retry.go:31] will retry after 2.616205885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:38.447170  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:38.483864  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:38.514147  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.514181  293728 retry.go:31] will retry after 2.714109668s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:06:38.886857  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:38.951997  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.952029  293728 retry.go:31] will retry after 2.462359856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 10:06:38.983614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.483264  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.983242  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:40.483248  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
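	Interleaved with the apply retries, the ssh_runner lines poll sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms (inferred from the timestamps), waiting for an apiserver process to appear. A self-contained sketch of that wait loop follows; the 2-minute deadline is an assumption, and minikube's real loop runs the command over SSH inside the node rather than locally.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// pgrep exits 0 as soon as a process matches the full-command-line pattern.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver is running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}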
	I1206 10:06:40.910479  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:40.983819  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:40.985785  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:40.985821  293728 retry.go:31] will retry after 2.652074408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:41.229298  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:41.298980  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.299018  293728 retry.go:31] will retry after 3.795353676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:06:41.415143  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:41.478696  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.478758  293728 retry.go:31] will retry after 5.28721939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 10:06:41.483845  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:41.983945  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.483250  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.483241  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.638309  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:43.697835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:43.697874  293728 retry.go:31] will retry after 4.887793633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:43.983195  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.483546  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.983775  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.095370  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:45.192562  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:45.192602  293728 retry.go:31] will retry after 8.015655906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:06:45.483497  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.984044  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.483220  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.766179  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:46.829923  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:46.829956  293728 retry.go:31] will retry after 4.667102636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 10:06:46.984011  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.483312  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.984058  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.586389  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:48.650814  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:48.650848  293728 retry.go:31] will retry after 13.339615646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:48.983299  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.483453  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.983414  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:50.483943  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:50.983588  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.497329  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:51.584226  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:51.584262  293728 retry.go:31] will retry after 10.765270657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
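	The stderr text itself suggests --validate=false, which skips the OpenAPI download entirely. That would silence the validation error but fix nothing: the apply still needs a live apiserver and would simply fail at request time instead of validation time. For illustration only, the same invocation with validation disabled (paths copied from the log; not a recommended remedy for this failure):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// Even with validation skipped, the request itself still needs
			// the apiserver, so the command fails while it is down.
			fmt.Println("apply failed:", err)
		}
	}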
	I1206 10:06:51.983783  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.484023  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.983169  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.208585  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:53.275063  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:53.275124  293728 retry.go:31] will retry after 12.265040886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:06:53.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.983886  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.483520  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.983246  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:55.484066  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:55.983753  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.483532  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.983522  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.483514  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.983263  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.483994  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.983173  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.483759  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.983187  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:00.483755  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:00.984174  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.983995  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.991432  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:02.091463  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.091500  293728 retry.go:31] will retry after 13.890333948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:07:02.349878  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:02.411835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.411870  293728 retry.go:31] will retry after 7.977295138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 10:07:02.483150  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:02.983902  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.483778  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.983278  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.483894  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.983934  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:05.483794  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:05.540834  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:05.606800  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:05.606832  293728 retry.go:31] will retry after 11.29369971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:07:05.983418  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.983887  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.483439  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.984054  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.483236  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.983521  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.483231  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:10.390061  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:10.460795  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:10.460828  293728 retry.go:31] will retry after 24.523063216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 10:07:10.483989  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:10.983508  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.483968  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.983921  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.983503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.483736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.983533  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.483788  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.983198  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:15.483180  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:15.982114  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:07:15.983591  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:16.054278  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:16.054318  293728 retry.go:31] will retry after 20.338606766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:07:16.484114  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:16.901533  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:07:16.984157  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:17.001960  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:17.001998  293728 retry.go:31] will retry after 24.827417164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:07:17.483261  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:17.983420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.983281  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.483741  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.983176  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:20.483695  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:20.983984  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.483862  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.483812  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.983632  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.483796  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.984175  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:25.483633  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:25.984006  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.483830  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.983203  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.483211  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.983237  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.484156  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.983736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.483880  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.984116  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:30.483549  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:30.983243  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.483786  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.983608  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:32.483844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:32.483952  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:32.508469  293728 cri.go:89] found id: ""
	I1206 10:07:32.508497  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.508505  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:32.508512  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:32.508574  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:32.533265  293728 cri.go:89] found id: ""
	I1206 10:07:32.533288  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.533297  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:32.533303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:32.533364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:32.562655  293728 cri.go:89] found id: ""
	I1206 10:07:32.562686  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.562695  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:32.562702  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:32.562769  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:32.587755  293728 cri.go:89] found id: ""
	I1206 10:07:32.587781  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.587789  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:32.587796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:32.587855  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:32.613253  293728 cri.go:89] found id: ""
	I1206 10:07:32.613284  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.613292  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:32.613305  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:32.613364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:32.638621  293728 cri.go:89] found id: ""
	I1206 10:07:32.638648  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.638656  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:32.638662  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:32.638775  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:32.663624  293728 cri.go:89] found id: ""
	I1206 10:07:32.663649  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.663657  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:32.663664  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:32.663724  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:32.687850  293728 cri.go:89] found id: ""
	I1206 10:07:32.687872  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.687881  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:32.687890  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:32.687901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:32.763755  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:32.763831  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:32.788174  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:32.788242  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:32.866103  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:32.857634    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.858159    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.859825    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.860421    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.862051    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:32.866126  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:32.866138  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:32.891711  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:32.891745  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:34.985041  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:35.094954  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:35.094988  293728 retry.go:31] will retry after 34.21540436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 10:07:35.421586  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:35.432096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:35.432164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:35.457419  293728 cri.go:89] found id: ""
	I1206 10:07:35.457442  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.457451  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:35.457457  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:35.457520  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:35.481490  293728 cri.go:89] found id: ""
	I1206 10:07:35.481513  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.481521  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:35.481527  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:35.481586  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:35.506409  293728 cri.go:89] found id: ""
	I1206 10:07:35.506432  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.506441  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:35.506447  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:35.506512  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:35.534896  293728 cri.go:89] found id: ""
	I1206 10:07:35.534923  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.534932  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:35.534939  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:35.534997  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:35.560020  293728 cri.go:89] found id: ""
	I1206 10:07:35.560043  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.560052  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:35.560058  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:35.560115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:35.584963  293728 cri.go:89] found id: ""
	I1206 10:07:35.585028  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.585042  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:35.585049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:35.585110  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:35.617464  293728 cri.go:89] found id: ""
	I1206 10:07:35.617487  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.617495  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:35.617501  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:35.617562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:35.642187  293728 cri.go:89] found id: ""
	I1206 10:07:35.642219  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.642228  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:35.642238  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:35.642250  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:35.655709  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:35.655738  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:35.728266  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:35.714434    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.715121    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.716831    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.717292    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.718947    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:35.728336  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:35.728379  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:35.766222  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:35.766301  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:35.823000  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:35.823024  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:36.393185  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:36.458951  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:36.458990  293728 retry.go:31] will retry after 24.220809087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:07:38.379270  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:38.389923  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:38.389993  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:38.416450  293728 cri.go:89] found id: ""
	I1206 10:07:38.416517  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.416540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:38.416558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:38.416635  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:38.442635  293728 cri.go:89] found id: ""
	I1206 10:07:38.442663  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.442672  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:38.442680  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:38.442742  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:38.469797  293728 cri.go:89] found id: ""
	I1206 10:07:38.469824  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.469834  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:38.469840  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:38.469899  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:38.497073  293728 cri.go:89] found id: ""
	I1206 10:07:38.497098  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.497107  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:38.497113  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:38.497194  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:38.527432  293728 cri.go:89] found id: ""
	I1206 10:07:38.527465  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.527474  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:38.527481  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:38.527540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:38.554253  293728 cri.go:89] found id: ""
	I1206 10:07:38.554278  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.554290  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:38.554300  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:38.554368  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:38.580022  293728 cri.go:89] found id: ""
	I1206 10:07:38.580070  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.580080  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:38.580087  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:38.580165  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:38.604967  293728 cri.go:89] found id: ""
	I1206 10:07:38.604992  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.605001  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:38.605010  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:38.605041  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:38.672012  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:38.663132    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.663961    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.665865    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.666410    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.668022    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
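[Editor's note] Every `describe nodes` attempt in this section fails the same way: kubectl cannot fetch the server's API group list because nothing is listening on the apiserver port, so each request to https://localhost:8443 is refused at the TCP level. That is consistent with the empty crictl scans above, where no kube-apiserver container exists at all. The failure can be reproduced with a plain TCP dial; this standalone snippet is illustrative, not part of the test suite.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The apiserver for this profile is expected on localhost:8443; a
        // refused dial matches kubectl's "connection refused" errors above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }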
	I1206 10:07:38.672044  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:38.672075  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:38.697533  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:38.697567  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:38.750151  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:38.750176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:38.835463  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:38.835500  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
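[Editor's note] This completes one full diagnostic cycle: a pgrep for a running kube-apiserver process, the eight-component crictl scan, then log gathering (kubelet and containerd via journalctl, dmesg filtered to warnings and above, `kubectl describe nodes`, and a container-status listing). The same cycle repeats roughly every three seconds for the remainder of this section, until minikube's apiserver wait eventually times out. A hedged sketch of such a poll loop follows; `waitFor` and the three-second interval here are stand-ins for illustration, not minikube's actual wait implementation.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check at the given interval until it succeeds or the
    // timeout elapses, returning the timeout as an error if it never does.
    func waitFor(check func() error, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := check(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for apiserver")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitFor(func() error {
            // Stand-in for: pgrep kube-apiserver + crictl scan + log gathering.
            return errors.New("apiserver process not found")
        }, 3*time.Second, 6*time.Minute)
        fmt.Println(err)
    }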
	I1206 10:07:41.350690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:41.361865  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:41.361934  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:41.387755  293728 cri.go:89] found id: ""
	I1206 10:07:41.387781  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.387789  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:41.387796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:41.387854  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:41.412482  293728 cri.go:89] found id: ""
	I1206 10:07:41.412510  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.412519  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:41.412526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:41.412591  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:41.437604  293728 cri.go:89] found id: ""
	I1206 10:07:41.437635  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.437644  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:41.437650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:41.437722  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:41.462503  293728 cri.go:89] found id: ""
	I1206 10:07:41.462573  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.462597  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:41.462616  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:41.462703  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:41.487720  293728 cri.go:89] found id: ""
	I1206 10:07:41.487742  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.487750  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:41.487757  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:41.487819  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:41.513291  293728 cri.go:89] found id: ""
	I1206 10:07:41.513321  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.513332  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:41.513342  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:41.513420  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:41.547109  293728 cri.go:89] found id: ""
	I1206 10:07:41.547132  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.547141  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:41.547147  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:41.547209  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:41.572514  293728 cri.go:89] found id: ""
	I1206 10:07:41.572585  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.572607  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:41.572628  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:41.572669  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:41.629345  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:41.629378  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:41.643897  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:41.643928  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:41.713946  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:41.705234    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.705673    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.707580    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.708362    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.710158    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:41.714006  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:41.714025  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:41.745589  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:41.745645  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
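[Editor's note] The "container status" step uses a two-level shell fallback: the backtick substitution `which crictl || echo crictl` expands to the resolved crictl path when one exists (or the bare name otherwise), and if that `crictl ps -a` invocation fails, the `||` falls through to `sudo docker ps -a`. The same fallback can be driven from Go by handing the one-liner to bash; this sketch assumes bash is available on the target, as it is inside the minikube node.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Try crictl first; if it is missing or errors, fall back to docker.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("both crictl and docker listings failed:", err)
        }
    }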
	I1206 10:07:41.830134  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:41.893553  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:41.893593  293728 retry.go:31] will retry after 44.351115962s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:07:44.324517  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:44.335432  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:44.335507  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:44.365594  293728 cri.go:89] found id: ""
	I1206 10:07:44.365621  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.365630  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:44.365637  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:44.365723  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:44.390876  293728 cri.go:89] found id: ""
	I1206 10:07:44.390909  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.390919  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:44.390944  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:44.391026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:44.421424  293728 cri.go:89] found id: ""
	I1206 10:07:44.421448  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.421462  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:44.421468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:44.421525  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:44.445299  293728 cri.go:89] found id: ""
	I1206 10:07:44.445325  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.445335  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:44.445341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:44.445454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:44.473977  293728 cri.go:89] found id: ""
	I1206 10:07:44.473999  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.474008  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:44.474014  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:44.474072  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:44.501273  293728 cri.go:89] found id: ""
	I1206 10:07:44.501299  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.501308  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:44.501341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:44.501415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:44.525106  293728 cri.go:89] found id: ""
	I1206 10:07:44.525136  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.525154  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:44.525161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:44.525223  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:44.550546  293728 cri.go:89] found id: ""
	I1206 10:07:44.550571  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.550580  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:44.550589  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:44.550600  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:44.615941  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:44.607694    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.608515    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610041    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610630    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.612121    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:44.615962  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:44.615975  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:44.641346  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:44.641377  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:44.669493  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:44.669520  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:44.727196  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:44.727357  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:47.260652  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:47.271164  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:47.271238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:47.295481  293728 cri.go:89] found id: ""
	I1206 10:07:47.295506  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.295515  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:47.295521  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:47.295581  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:47.321861  293728 cri.go:89] found id: ""
	I1206 10:07:47.321884  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.321892  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:47.321898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:47.321954  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:47.346071  293728 cri.go:89] found id: ""
	I1206 10:07:47.346094  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.346103  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:47.346110  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:47.346169  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:47.373210  293728 cri.go:89] found id: ""
	I1206 10:07:47.373234  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.373242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:47.373249  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:47.373312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:47.403706  293728 cri.go:89] found id: ""
	I1206 10:07:47.403729  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.403739  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:47.403745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:47.403810  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:47.433807  293728 cri.go:89] found id: ""
	I1206 10:07:47.433831  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.433840  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:47.433847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:47.433904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:47.462210  293728 cri.go:89] found id: ""
	I1206 10:07:47.462233  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.462241  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:47.462247  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:47.462308  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:47.486445  293728 cri.go:89] found id: ""
	I1206 10:07:47.486523  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.486546  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:47.486567  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:47.486597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:47.500083  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:47.500114  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:47.568637  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:47.558715    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.559476    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561148    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561466    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.564516    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:47.568661  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:47.568683  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:47.598178  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:47.598213  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:47.629224  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:47.629249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:50.187574  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:50.198529  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:50.198609  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:50.224708  293728 cri.go:89] found id: ""
	I1206 10:07:50.224731  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.224738  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:50.224744  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:50.224806  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:50.253337  293728 cri.go:89] found id: ""
	I1206 10:07:50.253361  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.253370  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:50.253376  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:50.253433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:50.278723  293728 cri.go:89] found id: ""
	I1206 10:07:50.278750  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.278759  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:50.278766  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:50.278830  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:50.308736  293728 cri.go:89] found id: ""
	I1206 10:07:50.308803  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.308822  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:50.308834  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:50.308894  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:50.333136  293728 cri.go:89] found id: ""
	I1206 10:07:50.333162  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.333171  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:50.333177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:50.333263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:50.358071  293728 cri.go:89] found id: ""
	I1206 10:07:50.358105  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.358114  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:50.358137  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:50.358215  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:50.382078  293728 cri.go:89] found id: ""
	I1206 10:07:50.382111  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.382120  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:50.382141  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:50.382222  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:50.407225  293728 cri.go:89] found id: ""
	I1206 10:07:50.407261  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.407270  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:50.407279  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:50.407291  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:50.466553  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:50.466588  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:50.480420  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:50.480450  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:50.546503  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:50.538132    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.538890    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.540463    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.541036    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.542600    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:50.546523  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:50.546546  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:50.573208  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:50.573243  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:53.100604  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:53.111611  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:53.111683  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:53.136465  293728 cri.go:89] found id: ""
	I1206 10:07:53.136494  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.136503  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:53.136510  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:53.136584  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:53.167397  293728 cri.go:89] found id: ""
	I1206 10:07:53.167419  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.167427  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:53.167433  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:53.167501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:53.191735  293728 cri.go:89] found id: ""
	I1206 10:07:53.191769  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.191778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:53.191784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:53.191849  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:53.216472  293728 cri.go:89] found id: ""
	I1206 10:07:53.216495  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.216506  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:53.216513  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:53.216570  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:53.242936  293728 cri.go:89] found id: ""
	I1206 10:07:53.242957  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.242966  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:53.242972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:53.243035  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:53.274015  293728 cri.go:89] found id: ""
	I1206 10:07:53.274041  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.274050  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:53.274056  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:53.274118  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:53.303348  293728 cri.go:89] found id: ""
	I1206 10:07:53.303371  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.303415  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:53.303422  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:53.303486  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:53.332691  293728 cri.go:89] found id: ""
	I1206 10:07:53.332716  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.332724  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:53.332733  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:53.332749  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:53.346274  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:53.346303  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:53.412178  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:53.403243    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.404038    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.405704    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.406009    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.408013    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:53.412203  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:53.412216  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:53.437974  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:53.438008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:53.469789  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:53.469816  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:56.029614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:56.044312  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:56.044385  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:56.074035  293728 cri.go:89] found id: ""
	I1206 10:07:56.074061  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.074071  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:56.074077  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:56.074137  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:56.101362  293728 cri.go:89] found id: ""
	I1206 10:07:56.101387  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.101397  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:56.101403  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:56.101472  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:56.132837  293728 cri.go:89] found id: ""
	I1206 10:07:56.132867  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.132876  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:56.132882  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:56.132949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:56.162095  293728 cri.go:89] found id: ""
	I1206 10:07:56.162121  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.162129  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:56.162136  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:56.162195  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:56.190088  293728 cri.go:89] found id: ""
	I1206 10:07:56.190113  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.190122  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:56.190128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:56.190188  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:56.217327  293728 cri.go:89] found id: ""
	I1206 10:07:56.217355  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.217365  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:56.217372  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:56.217432  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:56.242210  293728 cri.go:89] found id: ""
	I1206 10:07:56.242246  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.242255  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:56.242261  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:56.242330  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:56.266843  293728 cri.go:89] found id: ""
	I1206 10:07:56.266871  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.266879  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:56.266888  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:56.266900  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:56.324906  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:56.324941  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:56.339074  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:56.339111  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:56.407395  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:56.398763    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.399992    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.400889    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.401941    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.403601    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:56.407417  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:56.407434  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:56.433408  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:56.433442  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:58.962420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:58.984606  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:58.984688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:59.037604  293728 cri.go:89] found id: ""
	I1206 10:07:59.037795  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.038054  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:59.038096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:59.038236  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:59.074512  293728 cri.go:89] found id: ""
	I1206 10:07:59.074555  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.074564  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:59.074571  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:59.074638  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:59.101868  293728 cri.go:89] found id: ""
	I1206 10:07:59.101895  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.101904  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:59.101910  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:59.101973  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:59.127188  293728 cri.go:89] found id: ""
	I1206 10:07:59.127214  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.127223  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:59.127230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:59.127286  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:59.152234  293728 cri.go:89] found id: ""
	I1206 10:07:59.152259  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.152268  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:59.152274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:59.152342  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:59.177629  293728 cri.go:89] found id: ""
	I1206 10:07:59.177654  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.177663  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:59.177670  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:59.177728  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:59.202156  293728 cri.go:89] found id: ""
	I1206 10:07:59.202185  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.202195  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:59.202201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:59.202261  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:59.227130  293728 cri.go:89] found id: ""
	I1206 10:07:59.227165  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.227174  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
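
The eight "listing CRI containers" probes above are repeated for every control-plane and addon component; an equivalent one-shot scan, sketched from the exact crictl invocations in this log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching $name"
    done

Here every probe returns an empty id list, matching the found id: "" lines above.
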
	I1206 10:07:59.227183  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:59.227204  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:59.241522  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:59.241597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:59.311704  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:59.302465    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.302959    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.304730    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.305205    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.306765    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:59.302465    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.302959    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.304730    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.305205    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.306765    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:59.311730  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:59.311742  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:59.337213  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:59.337246  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:59.365911  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:59.365940  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
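
With "describe nodes" unreachable, minikube falls back to four host-level log sources. The commands are taken verbatim from the ssh_runner lines above and can be rerun by hand on the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
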
	I1206 10:08:00.680788  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:08:00.745958  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:00.746077  293728 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
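
The storage-provisioner apply dies in kubectl's client-side validation step, which needs the OpenAPI schema from the unreachable apiserver. The error text suggests --validate=false, but skipping validation only moves the failure later, since the apply must still reach localhost:8443 to submit the manifest. A sketch of the suggested variant, for completeness:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml
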
	I1206 10:08:01.925540  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:01.936468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:01.936592  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:01.965164  293728 cri.go:89] found id: ""
	I1206 10:08:01.965242  293728 logs.go:282] 0 containers: []
	W1206 10:08:01.965277  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:01.965302  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:01.965393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:02.013736  293728 cri.go:89] found id: ""
	I1206 10:08:02.013774  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.013783  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:02.013790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:02.013862  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:02.058535  293728 cri.go:89] found id: ""
	I1206 10:08:02.058627  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.058651  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:02.058685  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:02.058798  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:02.091149  293728 cri.go:89] found id: ""
	I1206 10:08:02.091213  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.091242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:02.091286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:02.091460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:02.116844  293728 cri.go:89] found id: ""
	I1206 10:08:02.116870  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.116878  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:02.116884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:02.116945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:02.143338  293728 cri.go:89] found id: ""
	I1206 10:08:02.143439  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.143463  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:02.143485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:02.143573  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:02.169310  293728 cri.go:89] found id: ""
	I1206 10:08:02.169333  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.169342  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:02.169348  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:02.169410  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:02.200025  293728 cri.go:89] found id: ""
	I1206 10:08:02.200096  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.200104  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:02.200113  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:02.200125  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:02.257304  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:02.257340  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:02.271507  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:02.271541  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:02.341058  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:02.331854    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.332684    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334338    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334769    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.336486    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:02.331854    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.332684    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334338    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334769    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.336486    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:02.341084  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:02.341097  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:02.367636  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:02.367672  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:04.899503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:04.910154  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:04.910231  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:04.934598  293728 cri.go:89] found id: ""
	I1206 10:08:04.934623  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.934632  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:04.934638  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:04.934699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:04.959971  293728 cri.go:89] found id: ""
	I1206 10:08:04.959995  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.960004  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:04.960010  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:04.960071  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:05.027645  293728 cri.go:89] found id: ""
	I1206 10:08:05.027668  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.027677  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:05.027683  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:05.027758  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:05.077828  293728 cri.go:89] found id: ""
	I1206 10:08:05.077868  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.077878  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:05.077884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:05.077946  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:05.103986  293728 cri.go:89] found id: ""
	I1206 10:08:05.104014  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.104023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:05.104029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:05.104091  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:05.129703  293728 cri.go:89] found id: ""
	I1206 10:08:05.129778  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.129822  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:05.129843  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:05.129930  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:05.156958  293728 cri.go:89] found id: ""
	I1206 10:08:05.156982  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.156990  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:05.156996  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:05.157058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:05.182537  293728 cri.go:89] found id: ""
	I1206 10:08:05.182565  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.182575  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:05.182585  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:05.182598  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:05.196389  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:05.196419  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:05.262239  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:05.253199    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.253990    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.255826    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.256391    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.257908    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:05.253199    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.253990    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.255826    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.256391    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.257908    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
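
The kubeconfig used by these probes points kubectl at the in-node apiserver endpoint. One way to confirm exactly which server address is being dialed (a hypothetical check, not part of the harness):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'
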
	I1206 10:08:05.262265  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:05.262278  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:05.288138  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:05.288178  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:05.316468  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:05.316497  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:07.872986  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:07.886594  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:07.886666  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:07.912554  293728 cri.go:89] found id: ""
	I1206 10:08:07.912580  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.912589  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:07.912595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:07.912668  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:07.938006  293728 cri.go:89] found id: ""
	I1206 10:08:07.938033  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.938042  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:07.938049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:07.938107  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:07.967969  293728 cri.go:89] found id: ""
	I1206 10:08:07.967995  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.968004  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:07.968011  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:07.968079  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:08.001472  293728 cri.go:89] found id: ""
	I1206 10:08:08.001495  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.001504  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:08.001511  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:08.001577  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:08.064509  293728 cri.go:89] found id: ""
	I1206 10:08:08.064538  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.064547  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:08.064554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:08.064612  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:08.094308  293728 cri.go:89] found id: ""
	I1206 10:08:08.094376  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.094402  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:08.094434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:08.094522  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:08.124650  293728 cri.go:89] found id: ""
	I1206 10:08:08.124695  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.124705  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:08.124712  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:08.124782  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:08.150816  293728 cri.go:89] found id: ""
	I1206 10:08:08.150851  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.150860  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:08.150868  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:08.150879  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:08.207170  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:08.207203  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:08.220834  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:08.220860  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:08.285113  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:08.285138  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:08.285153  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:08.311342  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:08.311548  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:09.310714  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:08:09.371609  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:09.371709  293728 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
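
Both addon failures (storage-provisioner earlier, default-storageclass here) are symptoms rather than causes: minikube queues them for retry (addons.go:477) and they will keep failing until the kube-apiserver static pod starts. The kubelet journal gathered in this loop is where the root cause would surface; a hypothetical filter for the relevant lines:

    sudo journalctl -u kubelet -n 400 | grep -Ei 'kube-apiserver|static pod|fail|error'
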
	I1206 10:08:10.840228  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:10.850847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:10.850914  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:10.881439  293728 cri.go:89] found id: ""
	I1206 10:08:10.881517  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.881540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:10.881555  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:10.881629  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:10.910942  293728 cri.go:89] found id: ""
	I1206 10:08:10.910971  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.910980  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:10.910987  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:10.911049  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:10.936471  293728 cri.go:89] found id: ""
	I1206 10:08:10.936495  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.936503  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:10.936509  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:10.936566  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:10.964540  293728 cri.go:89] found id: ""
	I1206 10:08:10.964567  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.964575  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:10.964581  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:10.964650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:11.035295  293728 cri.go:89] found id: ""
	I1206 10:08:11.035322  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.035332  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:11.035354  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:11.035433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:11.081240  293728 cri.go:89] found id: ""
	I1206 10:08:11.081266  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.081275  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:11.081282  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:11.081347  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:11.109502  293728 cri.go:89] found id: ""
	I1206 10:08:11.109543  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.109554  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:11.109561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:11.109625  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:11.138072  293728 cri.go:89] found id: ""
	I1206 10:08:11.138100  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.138113  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:11.138122  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:11.138134  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:11.207996  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:11.208060  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:11.208081  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:11.234490  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:11.234525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:11.263495  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:11.263525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:11.323991  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:11.324034  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:13.838014  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:13.849112  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:13.849181  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:13.873403  293728 cri.go:89] found id: ""
	I1206 10:08:13.873472  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.873498  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:13.873515  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:13.873602  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:13.900596  293728 cri.go:89] found id: ""
	I1206 10:08:13.900616  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.900625  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:13.900631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:13.900694  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:13.925385  293728 cri.go:89] found id: ""
	I1206 10:08:13.925409  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.925417  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:13.925424  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:13.925481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:13.950796  293728 cri.go:89] found id: ""
	I1206 10:08:13.950823  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.950837  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:13.950844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:13.950902  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:14.028934  293728 cri.go:89] found id: ""
	I1206 10:08:14.028964  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.028973  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:14.028979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:14.029058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:14.063925  293728 cri.go:89] found id: ""
	I1206 10:08:14.063948  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.063957  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:14.063963  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:14.064024  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:14.091439  293728 cri.go:89] found id: ""
	I1206 10:08:14.091465  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.091473  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:14.091480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:14.091556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:14.116453  293728 cri.go:89] found id: ""
	I1206 10:08:14.116476  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.116485  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:14.116494  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:14.116506  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:14.173576  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:14.173615  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:14.187707  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:14.187736  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:14.256417  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:14.256440  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:14.256452  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:14.281458  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:14.281490  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:16.809300  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:16.820406  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:16.820481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:16.845040  293728 cri.go:89] found id: ""
	I1206 10:08:16.845105  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.845130  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:16.845144  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:16.845217  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:16.875450  293728 cri.go:89] found id: ""
	I1206 10:08:16.875475  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.875484  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:16.875500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:16.875562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:16.902002  293728 cri.go:89] found id: ""
	I1206 10:08:16.902048  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.902059  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:16.902068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:16.902146  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:16.927319  293728 cri.go:89] found id: ""
	I1206 10:08:16.927353  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.927361  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:16.927368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:16.927466  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:16.952239  293728 cri.go:89] found id: ""
	I1206 10:08:16.952265  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.952273  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:16.952280  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:16.952386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:16.994322  293728 cri.go:89] found id: ""
	I1206 10:08:16.994351  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.994360  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:16.994368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:16.994437  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:17.032079  293728 cri.go:89] found id: ""
	I1206 10:08:17.032113  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.032122  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:17.032128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:17.032201  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:17.079256  293728 cri.go:89] found id: ""
	I1206 10:08:17.079321  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.079343  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:17.079364  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:17.079406  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:17.104677  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:17.104707  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:17.136676  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:17.136701  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:17.195915  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:17.195950  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:17.209626  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:17.209653  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:17.278745  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
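
The cycle visible throughout this section, a pgrep for an apiserver process roughly every three seconds followed by a fresh round of log gathering, is minikube waiting for the control plane to appear. The same wait can be written as a bounded poll using the identical pgrep pattern from the log (the 60-iteration cap is an assumption, not a minikube default):

    for i in $(seq 1 60); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done
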
	I1206 10:08:19.780767  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:19.791658  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:19.791756  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:19.820516  293728 cri.go:89] found id: ""
	I1206 10:08:19.820539  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.820547  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:19.820554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:19.820652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:19.845473  293728 cri.go:89] found id: ""
	I1206 10:08:19.845499  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.845507  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:19.845514  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:19.845572  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:19.871555  293728 cri.go:89] found id: ""
	I1206 10:08:19.871580  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.871592  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:19.871598  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:19.871658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:19.902754  293728 cri.go:89] found id: ""
	I1206 10:08:19.902778  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.902787  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:19.902793  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:19.902853  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:19.927447  293728 cri.go:89] found id: ""
	I1206 10:08:19.927473  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.927482  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:19.927489  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:19.927549  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:19.951607  293728 cri.go:89] found id: ""
	I1206 10:08:19.951634  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.951644  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:19.951651  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:19.951718  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:20.023839  293728 cri.go:89] found id: ""
	I1206 10:08:20.023868  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.023879  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:20.023886  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:20.023951  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:20.064702  293728 cri.go:89] found id: ""
	I1206 10:08:20.064730  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.064739  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:20.064748  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:20.064761  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:20.131531  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:20.131555  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:20.131566  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:20.157955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:20.157991  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:20.188100  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:20.188126  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:20.248399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:20.248437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
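The block above is one full pass of the health loop: first a `pgrep` for a kube-apiserver process, then one `crictl ps -a --quiet --name=<component>` lookup per control-plane component, and only once every lookup comes back empty, a sweep of diagnostic log sources. A rough, hypothetical Go rendering of the per-component sweep is below; the component list is taken verbatim from the log, while `runLocal` is an assumed stand-in for minikube's ssh_runner and simply runs the command locally.

// sweep.go: hypothetical sketch of the per-component container sweep seen in
// this log. Each component is looked up with `crictl ps -a --quiet --name=X`;
// an empty result corresponds to the `found id: ""` / `0 containers` lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runLocal stands in for minikube's ssh_runner; here it just runs locally.
func runLocal(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	anyFound := false
	for _, name := range components {
		ids, err := runLocal("sudo crictl ps -a --quiet --name=" + name)
		if err != nil || ids == "" {
			fmt.Printf("no container matching %q\n", name)
			continue
		}
		anyFound = true
		fmt.Printf("%s: %s\n", name, ids)
	}
	if !anyFound {
		// Matches the fallback in the log: gather unit logs for diagnosis.
		fmt.Println("no control-plane containers; would gather kubelet/containerd/dmesg logs")
	}
}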
	I1206 10:08:22.762476  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:22.774338  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:22.774408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:22.803197  293728 cri.go:89] found id: ""
	I1206 10:08:22.803220  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.803228  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:22.803234  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:22.803292  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:22.828985  293728 cri.go:89] found id: ""
	I1206 10:08:22.829009  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.829018  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:22.829024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:22.829084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:22.857670  293728 cri.go:89] found id: ""
	I1206 10:08:22.857695  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.857704  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:22.857710  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:22.857770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:22.886863  293728 cri.go:89] found id: ""
	I1206 10:08:22.886889  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.886898  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:22.886905  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:22.886967  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:22.912046  293728 cri.go:89] found id: ""
	I1206 10:08:22.912072  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.912080  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:22.912086  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:22.912149  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:22.940438  293728 cri.go:89] found id: ""
	I1206 10:08:22.940516  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.940530  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:22.940538  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:22.940597  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:22.965932  293728 cri.go:89] found id: ""
	I1206 10:08:22.965957  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.965966  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:22.965973  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:22.966034  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:23.036167  293728 cri.go:89] found id: ""
	I1206 10:08:23.036194  293728 logs.go:282] 0 containers: []
	W1206 10:08:23.036203  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:23.036212  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:23.036224  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:23.054454  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:23.054481  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:23.120660  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:23.120680  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:23.120692  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:23.146879  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:23.146913  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:23.177356  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:23.177389  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
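The timestamps give away the cadence: full probe-and-gather passes start at 10:08:19, 10:08:22 and 10:08:25, so the loop polls roughly every three seconds and keeps going until the outer start timeout expires. A hypothetical poll-until-ready loop with that shape follows; `checkAPIServer` mirrors the `pgrep` probe from the log, and the deadline is an assumed stand-in for the test's overall timeout, not a value taken from minikube.

// poll.go: hypothetical poll-until-ready loop matching the ~3 s retry
// cadence visible in this log (10:08:19, :22, :25, ...).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// checkAPIServer mimics `sudo pgrep -xnf kube-apiserver.*minikube.*`.
func checkAPIServer() bool {
	err := exec.Command("/bin/bash", "-c",
		`sudo pgrep -xnf "kube-apiserver.*minikube.*"`).Run()
	return err == nil // pgrep exits 0 only when a match exists
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		if checkAPIServer() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// The real loop gathers diagnostics on every miss, then sleeps.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}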
	I1206 10:08:25.739842  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:25.751155  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:25.751238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:25.781790  293728 cri.go:89] found id: ""
	I1206 10:08:25.781813  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.781821  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:25.781828  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:25.781884  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:25.809915  293728 cri.go:89] found id: ""
	I1206 10:08:25.809940  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.809948  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:25.809954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:25.810014  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:25.840293  293728 cri.go:89] found id: ""
	I1206 10:08:25.840318  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.840327  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:25.840334  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:25.840390  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:25.869368  293728 cri.go:89] found id: ""
	I1206 10:08:25.869401  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.869410  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:25.869416  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:25.869488  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:25.898302  293728 cri.go:89] found id: ""
	I1206 10:08:25.898335  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.898344  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:25.898351  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:25.898417  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:25.925837  293728 cri.go:89] found id: ""
	I1206 10:08:25.925864  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.925873  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:25.925880  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:25.925940  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:25.950501  293728 cri.go:89] found id: ""
	I1206 10:08:25.950537  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.950546  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:25.950552  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:25.950618  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:26.003264  293728 cri.go:89] found id: ""
	I1206 10:08:26.003294  293728 logs.go:282] 0 containers: []
	W1206 10:08:26.003305  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:26.003316  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:26.003327  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:26.046472  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:26.046503  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:26.091770  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:26.091798  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:26.148719  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:26.148755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:26.165689  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:26.165733  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:26.231230  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:26.245490  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:08:26.310812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:26.310914  293728 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:08:26.314238  293728 out.go:179] * Enabled addons: 
	I1206 10:08:26.317143  293728 addons.go:530] duration metric: took 1m53.881766525s for enable addons: enabled=[]
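The dashboard addon fails for the same root cause rather than any manifest problem: `kubectl apply` runs client-side schema validation, which first downloads the server's OpenAPI document, and that download hits the same refused connection on 8443. The `enabled=[]` in the duration metric above confirms that no addon was actually applied. The error text itself names the escape hatch; a hypothetical retry with validation disabled is sketched below. Note that `--validate=false` only skips the OpenAPI fetch: the apply still needs a reachable apiserver, so it would not have succeeded here either. The binary and manifest paths are copied from the log, and the manifest list is deliberately abbreviated.

// apply.go: hypothetical retry of the dashboard manifests with client-side
// validation off, mirroring the workaround named in the kubectl error above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// remaining dashboard manifests from the log would be listed here
	}
	var files []string
	for _, m := range manifests {
		files = append(files, "-f "+m)
	}
	shell := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false " +
		strings.Join(files, " ")
	out, err := exec.Command("/bin/bash", "-c", shell).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}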
	I1206 10:08:28.731518  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:28.742380  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:28.742460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:28.768392  293728 cri.go:89] found id: ""
	I1206 10:08:28.768416  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.768425  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:28.768431  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:28.768489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:28.795017  293728 cri.go:89] found id: ""
	I1206 10:08:28.795043  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.795052  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:28.795059  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:28.795130  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:28.831707  293728 cri.go:89] found id: ""
	I1206 10:08:28.831734  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.831742  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:28.831748  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:28.831807  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:28.857267  293728 cri.go:89] found id: ""
	I1206 10:08:28.857293  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.857304  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:28.857317  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:28.857415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:28.887732  293728 cri.go:89] found id: ""
	I1206 10:08:28.887754  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.887762  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:28.887769  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:28.887827  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:28.912905  293728 cri.go:89] found id: ""
	I1206 10:08:28.912970  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.912984  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:28.912992  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:28.913051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:28.937740  293728 cri.go:89] found id: ""
	I1206 10:08:28.937764  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.937774  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:28.937781  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:28.937840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:28.964042  293728 cri.go:89] found id: ""
	I1206 10:08:28.964111  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.964126  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:28.964135  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:28.964147  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:29.034399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:29.034439  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:29.059150  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:29.059176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:29.134200  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:29.134222  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:29.134235  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:29.160868  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:29.160901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
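Each pass gathers the same five sources with the exact shell commands shown: kubelet and containerd via journalctl, kernel warnings via dmesg, container status via `crictl ps -a` (falling back to `docker ps -a`), and `kubectl describe nodes`. The order the sources appear in shifts from pass to pass (describe-first at 10:08:20, dmesg-first at 10:08:23, containerd-first at 10:08:26), which is what iteration over a Go map looks like. A hypothetical sketch of that gather step, using the commands copied from the log:

// gather.go: hypothetical rendering of the five log sources this loop
// collects each pass; map iteration order is random, which would explain
// the shifting "Gathering logs for ..." order across passes.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes " +
			"--kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for name, cmd := range sources {
		fmt.Println("Gathering logs for " + name + " ...")
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(out))
	}
}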
	I1206 10:08:31.689201  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:31.700497  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:31.700569  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:31.726402  293728 cri.go:89] found id: ""
	I1206 10:08:31.726426  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.726434  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:31.726441  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:31.726503  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:31.752620  293728 cri.go:89] found id: ""
	I1206 10:08:31.752644  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.752652  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:31.752659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:31.752720  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:31.778722  293728 cri.go:89] found id: ""
	I1206 10:08:31.778749  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.778758  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:31.778764  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:31.778825  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:31.804730  293728 cri.go:89] found id: ""
	I1206 10:08:31.804754  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.804762  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:31.804768  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:31.804828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:31.834276  293728 cri.go:89] found id: ""
	I1206 10:08:31.834303  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.834312  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:31.834322  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:31.834388  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:31.859721  293728 cri.go:89] found id: ""
	I1206 10:08:31.859744  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.859752  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:31.859759  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:31.859889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:31.888679  293728 cri.go:89] found id: ""
	I1206 10:08:31.888746  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.888760  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:31.888767  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:31.888828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:31.915769  293728 cri.go:89] found id: ""
	I1206 10:08:31.915794  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.915804  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:31.915812  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:31.915825  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:31.929129  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:31.929155  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:32.017380  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:32.017406  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:32.017420  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:32.046135  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:32.046218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:32.081462  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:32.081485  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:34.642406  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:34.653187  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:34.653263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:34.683091  293728 cri.go:89] found id: ""
	I1206 10:08:34.683116  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.683124  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:34.683130  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:34.683189  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:34.709426  293728 cri.go:89] found id: ""
	I1206 10:08:34.709453  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.709462  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:34.709468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:34.709528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:34.740189  293728 cri.go:89] found id: ""
	I1206 10:08:34.740215  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.740223  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:34.740230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:34.740289  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:34.769902  293728 cri.go:89] found id: ""
	I1206 10:08:34.769932  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.769942  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:34.769954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:34.770026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:34.797331  293728 cri.go:89] found id: ""
	I1206 10:08:34.797358  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.797367  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:34.797374  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:34.797434  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:34.823286  293728 cri.go:89] found id: ""
	I1206 10:08:34.823309  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.823318  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:34.823324  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:34.823406  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:34.849130  293728 cri.go:89] found id: ""
	I1206 10:08:34.849153  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.849162  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:34.849168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:34.849229  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:34.873883  293728 cri.go:89] found id: ""
	I1206 10:08:34.873905  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.873913  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:34.873922  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:34.873933  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:34.929942  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:34.929976  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:34.944124  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:34.944205  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:35.057155  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:35.057180  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:35.057193  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:35.090699  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:35.090741  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
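One more recurring pattern worth noting: every failed describe attempt prints five nearly identical memcache.go errors from the same PID, suggesting the client retries the /api group-list fetch several times before surrendering with the final "connection to the server ... was refused" line. A hypothetical illustration of that bounded-retry shape follows; the endpoint is copied from the log, while the retry count and TLS handling are assumptions for illustration, not kubectl internals.

// discovery.go: hypothetical bounded retry against the API group-list
// endpoint, echoing the five repeated memcache.go errors per attempt.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const attempts = 5 // assumed bound, matching the five E-lines per attempt
	for i := 1; i <= attempts; i++ {
		resp, err := client.Get("https://localhost:8443/api?timeout=32s")
		if err == nil {
			resp.Body.Close()
			fmt.Println("API group list reachable")
			return
		}
		fmt.Printf("attempt %d: %v\n", i, err)
	}
	fmt.Println("giving up: the connection to the server localhost:8443 was refused")
}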
	I1206 10:08:37.620713  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:37.631409  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:37.631478  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:37.668926  293728 cri.go:89] found id: ""
	I1206 10:08:37.668949  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.668958  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:37.668966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:37.669025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:37.698809  293728 cri.go:89] found id: ""
	I1206 10:08:37.698831  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.698840  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:37.698846  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:37.698905  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:37.726123  293728 cri.go:89] found id: ""
	I1206 10:08:37.726146  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.726155  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:37.726161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:37.726219  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:37.750745  293728 cri.go:89] found id: ""
	I1206 10:08:37.750818  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.750842  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:37.750861  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:37.750945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:37.777744  293728 cri.go:89] found id: ""
	I1206 10:08:37.777814  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.777837  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:37.777857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:37.777945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:37.804124  293728 cri.go:89] found id: ""
	I1206 10:08:37.804151  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.804160  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:37.804166  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:37.804243  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:37.828930  293728 cri.go:89] found id: ""
	I1206 10:08:37.828995  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.829010  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:37.829017  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:37.829076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:37.853436  293728 cri.go:89] found id: ""
	I1206 10:08:37.853459  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.853468  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
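Each pass sweeps the same eight component names through crictl, and all of them come back empty here. The repeated calls above amount to this loop (a sketch):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%s: ' "$c"
      sudo crictl ps -a --quiet --name="$c" | wc -l   # 0 lines = no container found
    done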
	I1206 10:08:37.853476  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:37.853493  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:37.910673  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:37.910709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
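The dmesg flags keep the kernel log short and plain: -P disables the pager that -H (human-readable output) would otherwise invoke, -L=never strips color escapes so the captured log stays clean, and --level restricts output to warn severity and above; tail -n 400 caps the output to match the journalctl calls. The command runs standalone exactly as logged (a sketch):

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400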
	I1206 10:08:37.926464  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:37.926504  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:38.046192  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:38.019476    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.031978    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.032900    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037073    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037736    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (duplicate of the stderr lines above; omitted)
	** /stderr **
	I1206 10:08:38.046217  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:38.046230  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:38.078770  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:38.078805  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:40.613605  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:40.624180  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:40.624256  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:40.648680  293728 cri.go:89] found id: ""
	I1206 10:08:40.648706  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.648715  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:40.648721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:40.648783  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:40.674691  293728 cri.go:89] found id: ""
	I1206 10:08:40.674716  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.674725  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:40.674732  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:40.674802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:40.700970  293728 cri.go:89] found id: ""
	I1206 10:08:40.700997  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.701006  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:40.701013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:40.701076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:40.729911  293728 cri.go:89] found id: ""
	I1206 10:08:40.729940  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.729949  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:40.729956  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:40.730020  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:40.755581  293728 cri.go:89] found id: ""
	I1206 10:08:40.755611  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.755620  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:40.755626  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:40.755686  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:40.781938  293728 cri.go:89] found id: ""
	I1206 10:08:40.782007  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.782030  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:40.782051  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:40.782139  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:40.811855  293728 cri.go:89] found id: ""
	I1206 10:08:40.811880  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.811889  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:40.811895  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:40.811961  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:40.841527  293728 cri.go:89] found id: ""
	I1206 10:08:40.841553  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.841562  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:40.841571  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:40.841583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:40.854956  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:40.854983  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:40.924783  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:40.916653    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.917278    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.918774    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.919183    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.920651    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (duplicate of the stderr lines above; omitted)
	** /stderr **
	I1206 10:08:40.924807  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:40.924823  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:40.950611  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:40.950646  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:41.021978  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:41.022008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:43.596447  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:43.607463  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:43.607540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:43.632638  293728 cri.go:89] found id: ""
	I1206 10:08:43.632660  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.632668  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:43.632675  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:43.632737  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:43.657538  293728 cri.go:89] found id: ""
	I1206 10:08:43.657616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.657632  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:43.657639  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:43.657711  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:43.683595  293728 cri.go:89] found id: ""
	I1206 10:08:43.683621  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.683630  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:43.683636  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:43.683706  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:43.709348  293728 cri.go:89] found id: ""
	I1206 10:08:43.709371  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.709380  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:43.709387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:43.709451  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:43.734592  293728 cri.go:89] found id: ""
	I1206 10:08:43.734616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.734625  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:43.734631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:43.734689  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:43.761297  293728 cri.go:89] found id: ""
	I1206 10:08:43.761362  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.761387  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:43.761405  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:43.761493  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:43.789795  293728 cri.go:89] found id: ""
	I1206 10:08:43.789831  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.789840  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:43.789847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:43.789919  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:43.817708  293728 cri.go:89] found id: ""
	I1206 10:08:43.817735  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.817744  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:43.817762  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:43.817774  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:43.831448  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:43.831483  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:43.897033  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:43.888843    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.889730    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891528    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891839    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.893322    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (duplicate of the stderr lines above; omitted)
	** /stderr **
	I1206 10:08:43.897107  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:43.897131  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:43.922955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:43.922990  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:43.960423  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:43.960457  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:46.534389  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:46.545120  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:46.545205  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:46.570287  293728 cri.go:89] found id: ""
	I1206 10:08:46.570313  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.570322  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:46.570328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:46.570391  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:46.600524  293728 cri.go:89] found id: ""
	I1206 10:08:46.600609  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.600631  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:46.600650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:46.600734  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:46.627292  293728 cri.go:89] found id: ""
	I1206 10:08:46.627314  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.627322  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:46.627328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:46.627424  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:46.652620  293728 cri.go:89] found id: ""
	I1206 10:08:46.652642  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.652651  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:46.652657  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:46.652716  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:46.681992  293728 cri.go:89] found id: ""
	I1206 10:08:46.682015  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.682023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:46.682029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:46.682087  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:46.708290  293728 cri.go:89] found id: ""
	I1206 10:08:46.708363  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.708408  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:46.708434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:46.708528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:46.737816  293728 cri.go:89] found id: ""
	I1206 10:08:46.737890  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.737915  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:46.737935  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:46.738021  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:46.768334  293728 cri.go:89] found id: ""
	I1206 10:08:46.768407  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.768430  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:46.768451  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:46.768491  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:46.782268  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:46.782344  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:46.850687  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:46.840824    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.841622    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.843626    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.844354    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.846055    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (duplicate of the stderr lines above; omitted)
	** /stderr **
	I1206 10:08:46.850714  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:46.850727  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:46.877310  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:46.877362  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:46.909345  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:46.909376  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:49.467346  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:49.477899  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:49.477971  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:49.502546  293728 cri.go:89] found id: ""
	I1206 10:08:49.502569  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.502578  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:49.502584  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:49.502646  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:49.527592  293728 cri.go:89] found id: ""
	I1206 10:08:49.527663  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.527686  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:49.527699  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:49.527760  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:49.553748  293728 cri.go:89] found id: ""
	I1206 10:08:49.553770  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.553778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:49.553784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:49.553841  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:49.580182  293728 cri.go:89] found id: ""
	I1206 10:08:49.580205  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.580214  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:49.580220  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:49.580285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:49.609009  293728 cri.go:89] found id: ""
	I1206 10:08:49.609034  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.609043  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:49.609050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:49.609114  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:49.634196  293728 cri.go:89] found id: ""
	I1206 10:08:49.634218  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.634227  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:49.634233  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:49.634293  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:49.660015  293728 cri.go:89] found id: ""
	I1206 10:08:49.660038  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.660047  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:49.660053  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:49.660115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:49.685329  293728 cri.go:89] found id: ""
	I1206 10:08:49.685355  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.685364  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:49.685373  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:49.685385  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:49.699189  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:49.699218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:49.768229  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:49.760011    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.760509    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762154    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762619    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.764026    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (duplicate of the stderr lines above; omitted)
	** /stderr **
	I1206 10:08:49.768253  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:49.768267  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:49.794221  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:49.794255  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:49.825320  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:49.825349  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:52.381962  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:52.392897  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:52.392974  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:52.421172  293728 cri.go:89] found id: ""
	I1206 10:08:52.421197  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.421206  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:52.421212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:52.421276  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:52.449281  293728 cri.go:89] found id: ""
	I1206 10:08:52.449305  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.449313  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:52.449320  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:52.449378  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:52.474517  293728 cri.go:89] found id: ""
	I1206 10:08:52.474539  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.474547  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:52.474553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:52.474616  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:52.500435  293728 cri.go:89] found id: ""
	I1206 10:08:52.500458  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.500466  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:52.500473  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:52.500532  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:52.526935  293728 cri.go:89] found id: ""
	I1206 10:08:52.526957  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.526965  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:52.526972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:52.527031  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:52.553625  293728 cri.go:89] found id: ""
	I1206 10:08:52.553646  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.553654  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:52.553663  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:52.553721  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:52.580092  293728 cri.go:89] found id: ""
	I1206 10:08:52.580169  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.580194  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:52.580206  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:52.580269  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:52.609595  293728 cri.go:89] found id: ""
	I1206 10:08:52.609622  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.609631  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:52.609640  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:52.609658  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:52.666423  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:52.666460  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:52.680542  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:52.680572  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:52.745123  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:52.737007    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.737635    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739181    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739662    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.741168    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (duplicate of the stderr lines above; omitted)
	** /stderr **
	I1206 10:08:52.745142  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:52.745154  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:52.771578  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:52.771612  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:55.300596  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:55.311733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:55.311837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:55.337436  293728 cri.go:89] found id: ""
	I1206 10:08:55.337466  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.337475  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:55.337482  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:55.337557  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:55.362426  293728 cri.go:89] found id: ""
	I1206 10:08:55.362449  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.362457  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:55.362462  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:55.362539  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:55.388462  293728 cri.go:89] found id: ""
	I1206 10:08:55.388488  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.388497  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:55.388503  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:55.388567  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:55.417368  293728 cri.go:89] found id: ""
	I1206 10:08:55.417391  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.417400  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:55.417406  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:55.417465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:55.444014  293728 cri.go:89] found id: ""
	I1206 10:08:55.444052  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.444061  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:55.444067  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:55.444126  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:55.473384  293728 cri.go:89] found id: ""
	I1206 10:08:55.473408  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.473417  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:55.473423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:55.473485  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:55.499095  293728 cri.go:89] found id: ""
	I1206 10:08:55.499119  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.499128  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:55.499134  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:55.499193  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:55.530488  293728 cri.go:89] found id: ""
	I1206 10:08:55.530560  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.530585  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:55.530607  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:55.530642  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:55.543996  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:55.544023  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:55.609232  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:55.600433    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.601179    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.602847    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.603477    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.605074    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (duplicate of the stderr lines above; omitted)
	** /stderr **
	I1206 10:08:55.609295  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:55.609315  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:55.635259  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:55.635292  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:55.663234  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:55.663263  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:58.219942  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:58.240184  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:58.240251  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:58.288171  293728 cri.go:89] found id: ""
	I1206 10:08:58.288193  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.288201  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:58.288208  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:58.288267  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:58.326999  293728 cri.go:89] found id: ""
	I1206 10:08:58.327020  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.327029  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:58.327035  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:58.327104  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:58.354289  293728 cri.go:89] found id: ""
	I1206 10:08:58.354316  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.354325  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:58.354331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:58.354392  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:58.378166  293728 cri.go:89] found id: ""
	I1206 10:08:58.378195  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.378204  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:58.378210  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:58.378270  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:58.405700  293728 cri.go:89] found id: ""
	I1206 10:08:58.405721  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.405734  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:58.405740  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:58.405800  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:58.430772  293728 cri.go:89] found id: ""
	I1206 10:08:58.430800  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.430809  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:58.430816  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:58.430882  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:58.455749  293728 cri.go:89] found id: ""
	I1206 10:08:58.455777  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.455787  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:58.455793  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:58.455854  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:58.480448  293728 cri.go:89] found id: ""
	I1206 10:08:58.480491  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.480502  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:58.480512  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:58.480527  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:58.536659  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:58.536697  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:58.550566  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:58.550589  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:58.618059  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:58.608926    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.609448    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.611304    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.612003    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.613723    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:08:58.618081  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:58.618093  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:58.643111  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:58.643142  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
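The cycle above is minikube's apiserver readiness probe: it pgreps for a kube-apiserver process, queries each expected component through crictl, and falls back to gathering logs when nothing is found. Below is a minimal standalone sketch of the same probe, assuming direct shell access to the node and passwordless sudo; the retry count and interval are illustrative, not taken from the log.

    #!/bin/bash
    # Hypothetical retry settings; minikube's own timing is not shown in the log.
    for attempt in $(seq 1 10); do
      # Exact process check from the log (-x exact match, -n newest, -f full cmdline):
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is running"
        exit 0
      fi
      # Exact per-component CRI query from the log; empty output means no container:
      if [ -z "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; then
        echo "attempt $attempt: no kube-apiserver container yet"
      fi
      sleep 3
    done
    echo "kube-apiserver never came up" >&2
    exit 1

The same crictl query is repeated for etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard in each cycle; only the --name filter changes.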
	I1206 10:09:01.172811  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:01.189894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:01.189970  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:01.216506  293728 cri.go:89] found id: ""
	I1206 10:09:01.216533  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.216542  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:01.216549  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:01.216610  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:01.248643  293728 cri.go:89] found id: ""
	I1206 10:09:01.248667  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.248675  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:01.248681  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:01.248754  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:01.282778  293728 cri.go:89] found id: ""
	I1206 10:09:01.282799  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.282808  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:01.282814  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:01.282874  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:01.317892  293728 cri.go:89] found id: ""
	I1206 10:09:01.317914  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.317923  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:01.317929  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:01.317996  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:01.344569  293728 cri.go:89] found id: ""
	I1206 10:09:01.344596  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.344606  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:01.344612  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:01.344675  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:01.374785  293728 cri.go:89] found id: ""
	I1206 10:09:01.374812  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.374822  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:01.374829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:01.374913  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:01.399962  293728 cri.go:89] found id: ""
	I1206 10:09:01.399986  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.399995  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:01.400001  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:01.400120  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:01.426824  293728 cri.go:89] found id: ""
	I1206 10:09:01.426850  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.426859  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:01.426877  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:01.426904  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:01.484968  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:01.485001  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:01.506470  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:01.506550  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:01.586157  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:01.577286    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.578043    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.579813    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.580524    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.582153    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:01.586226  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:01.586241  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:01.616859  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:01.617050  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
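Every "describe nodes" attempt in this stretch fails identically: the kubeconfig targets localhost:8443 and nothing is listening there, so kubectl's API discovery calls get connection refused before any real request is made. As a hedged illustration (not part of this run), the symptom can be reproduced by hand with the same binary and kubeconfig paths the log uses; the curl probe is an extra check and assumes curl is available on the node.

    # Same kubectl binary and kubeconfig as in the log; fails with
    # "connection refused" for as long as the apiserver container does not exist:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # Port-level probe of the same endpoint (assumption: curl exists in the node image):
    curl -k --max-time 5 https://localhost:8443/healthz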
	I1206 10:09:04.147855  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:04.161529  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:04.161601  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:04.185793  293728 cri.go:89] found id: ""
	I1206 10:09:04.185817  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.185826  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:04.185832  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:04.185893  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:04.213785  293728 cri.go:89] found id: ""
	I1206 10:09:04.213809  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.213818  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:04.213824  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:04.213886  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:04.245746  293728 cri.go:89] found id: ""
	I1206 10:09:04.245769  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.245778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:04.245784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:04.245844  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:04.276836  293728 cri.go:89] found id: ""
	I1206 10:09:04.276864  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.276873  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:04.276879  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:04.276949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:04.307027  293728 cri.go:89] found id: ""
	I1206 10:09:04.307054  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.307089  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:04.307096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:04.307171  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:04.332480  293728 cri.go:89] found id: ""
	I1206 10:09:04.332503  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.332511  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:04.332518  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:04.332580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:04.359083  293728 cri.go:89] found id: ""
	I1206 10:09:04.359105  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.359113  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:04.359119  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:04.359178  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:04.384459  293728 cri.go:89] found id: ""
	I1206 10:09:04.384527  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.384560  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:04.384576  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:04.384589  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:04.398476  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:04.398508  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:04.464529  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:04.455141    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.455968    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.457782    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.458361    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.459895    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:04.464551  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:04.464564  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:04.493800  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:04.493842  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:04.533422  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:04.533455  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:07.095340  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:07.106226  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:07.106321  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:07.133785  293728 cri.go:89] found id: ""
	I1206 10:09:07.133849  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.133886  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:07.133907  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:07.133972  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:07.169905  293728 cri.go:89] found id: ""
	I1206 10:09:07.169932  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.169957  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:07.169964  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:07.170039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:07.198212  293728 cri.go:89] found id: ""
	I1206 10:09:07.198285  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.198309  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:07.198329  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:07.198499  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:07.236730  293728 cri.go:89] found id: ""
	I1206 10:09:07.236809  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.236842  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:07.236862  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:07.236969  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:07.264908  293728 cri.go:89] found id: ""
	I1206 10:09:07.264984  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.265015  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:07.265037  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:07.265147  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:07.293030  293728 cri.go:89] found id: ""
	I1206 10:09:07.293102  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.293125  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:07.293146  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:07.293253  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:07.320479  293728 cri.go:89] found id: ""
	I1206 10:09:07.320542  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.320572  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:07.320600  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:07.320712  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:07.346369  293728 cri.go:89] found id: ""
	I1206 10:09:07.346431  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.346461  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:07.346486  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:07.346524  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:07.375165  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:07.375244  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:07.433189  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:07.433225  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:07.447472  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:07.447500  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:07.536150  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:07.524233    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.525315    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527184    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527855    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.532128    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:07.536173  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:07.536186  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:10.062333  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:10.073694  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:10.073767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:10.101307  293728 cri.go:89] found id: ""
	I1206 10:09:10.101330  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.101339  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:10.101346  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:10.101413  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:10.128394  293728 cri.go:89] found id: ""
	I1206 10:09:10.128420  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.128428  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:10.128436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:10.128497  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:10.154510  293728 cri.go:89] found id: ""
	I1206 10:09:10.154536  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.154545  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:10.154552  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:10.154611  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:10.179782  293728 cri.go:89] found id: ""
	I1206 10:09:10.179808  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.179816  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:10.179822  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:10.179888  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:10.210072  293728 cri.go:89] found id: ""
	I1206 10:09:10.210142  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.210171  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:10.210201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:10.210315  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:10.245657  293728 cri.go:89] found id: ""
	I1206 10:09:10.245676  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.245684  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:10.245691  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:10.245748  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:10.282232  293728 cri.go:89] found id: ""
	I1206 10:09:10.282305  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.282345  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:10.282365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:10.282454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:10.313160  293728 cri.go:89] found id: ""
	I1206 10:09:10.313225  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.313239  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:10.313249  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:10.313261  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:10.373196  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:10.373230  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:10.386792  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:10.386819  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:10.450525  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:10.442280    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.442968    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.444545    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.445040    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.446664    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:10.450547  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:10.450560  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:10.476832  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:10.476869  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:13.012652  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:13.023659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:13.023732  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:13.047365  293728 cri.go:89] found id: ""
	I1206 10:09:13.047458  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.047473  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:13.047480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:13.047541  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:13.072937  293728 cri.go:89] found id: ""
	I1206 10:09:13.072961  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.072970  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:13.072987  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:13.073048  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:13.097439  293728 cri.go:89] found id: ""
	I1206 10:09:13.097515  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.097531  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:13.097539  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:13.097600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:13.123273  293728 cri.go:89] found id: ""
	I1206 10:09:13.123307  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.123316  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:13.123323  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:13.123426  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:13.149441  293728 cri.go:89] found id: ""
	I1206 10:09:13.149518  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.149534  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:13.149542  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:13.149608  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:13.174275  293728 cri.go:89] found id: ""
	I1206 10:09:13.174298  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.174306  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:13.174313  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:13.174379  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:13.203852  293728 cri.go:89] found id: ""
	I1206 10:09:13.203926  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.203942  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:13.203951  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:13.204013  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:13.237842  293728 cri.go:89] found id: ""
	I1206 10:09:13.237866  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.237875  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:13.237884  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:13.237899  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:13.305042  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:13.305078  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:13.319151  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:13.319178  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:13.383092  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:13.374391    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.375129    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.376927    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.377619    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.379235    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:13.383112  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:13.383123  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:13.409266  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:13.409295  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:15.937340  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:15.948165  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:15.948296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:15.973427  293728 cri.go:89] found id: ""
	I1206 10:09:15.973452  293728 logs.go:282] 0 containers: []
	W1206 10:09:15.973461  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:15.973467  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:15.973529  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:16.006761  293728 cri.go:89] found id: ""
	I1206 10:09:16.006806  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.006816  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:16.006824  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:16.006907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:16.034447  293728 cri.go:89] found id: ""
	I1206 10:09:16.034483  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.034492  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:16.034499  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:16.034572  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:16.060884  293728 cri.go:89] found id: ""
	I1206 10:09:16.060955  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.060972  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:16.060979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:16.061039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:16.090437  293728 cri.go:89] found id: ""
	I1206 10:09:16.090461  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.090470  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:16.090476  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:16.090548  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:16.118175  293728 cri.go:89] found id: ""
	I1206 10:09:16.118201  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.118209  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:16.118222  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:16.118342  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:16.144978  293728 cri.go:89] found id: ""
	I1206 10:09:16.145005  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.145015  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:16.145021  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:16.145083  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:16.169350  293728 cri.go:89] found id: ""
	I1206 10:09:16.169378  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.169392  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:16.169401  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:16.169412  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:16.228680  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:16.228755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:16.243103  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:16.243179  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:16.316618  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:16.307974    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.308682    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310238    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310832    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.312513    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:16.316645  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:16.316658  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:16.342620  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:16.342651  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:18.872579  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:18.883111  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:18.883184  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:18.909365  293728 cri.go:89] found id: ""
	I1206 10:09:18.909393  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.909402  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:18.909410  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:18.909480  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:18.933714  293728 cri.go:89] found id: ""
	I1206 10:09:18.933737  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.933746  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:18.933752  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:18.933811  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:18.963141  293728 cri.go:89] found id: ""
	I1206 10:09:18.963206  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.963228  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:18.963245  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:18.963333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:18.988486  293728 cri.go:89] found id: ""
	I1206 10:09:18.988511  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.988519  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:18.988526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:18.988604  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:19.020422  293728 cri.go:89] found id: ""
	I1206 10:09:19.020448  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.020456  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:19.020463  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:19.020543  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:19.045103  293728 cri.go:89] found id: ""
	I1206 10:09:19.045164  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.045179  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:19.045186  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:19.045245  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:19.069289  293728 cri.go:89] found id: ""
	I1206 10:09:19.069322  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.069331  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:19.069337  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:19.069403  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:19.094504  293728 cri.go:89] found id: ""
	I1206 10:09:19.094539  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.094547  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:19.094557  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:19.094569  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:19.108440  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:19.108469  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:19.175508  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:19.166822    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.167472    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169260    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169788    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.171507    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:19.175529  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:19.175542  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:19.201390  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:19.201424  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:19.243342  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:19.243364  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
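With no component containers to inspect, each cycle falls back to host-level evidence: the kubelet and containerd service journals, recent kernel messages, and a raw container listing. Collected as a standalone sketch, every command below appears verbatim in the log above; only the fixed ordering here is an editorial choice, since the log runs them in varying order per cycle.

    # Last 400 lines of the kubelet and containerd service journals:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # Kernel warnings and errors, human-readable, no color, newest 400 lines:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Container listing via crictl, falling back to docker if crictl is absent:
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a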
	I1206 10:09:21.808230  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:21.818876  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:21.818955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:21.848634  293728 cri.go:89] found id: ""
	I1206 10:09:21.848655  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.848663  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:21.848669  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:21.848728  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:21.872798  293728 cri.go:89] found id: ""
	I1206 10:09:21.872861  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.872875  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:21.872882  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:21.872938  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:21.900148  293728 cri.go:89] found id: ""
	I1206 10:09:21.900174  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.900183  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:21.900190  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:21.900250  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:21.924786  293728 cri.go:89] found id: ""
	I1206 10:09:21.924813  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.924822  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:21.924829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:21.924915  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:21.954178  293728 cri.go:89] found id: ""
	I1206 10:09:21.954212  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.954221  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:21.954227  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:21.954296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:21.979818  293728 cri.go:89] found id: ""
	I1206 10:09:21.979842  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.979850  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:21.979857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:21.979916  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:22.008409  293728 cri.go:89] found id: ""
	I1206 10:09:22.008435  293728 logs.go:282] 0 containers: []
	W1206 10:09:22.008445  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:22.008452  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:22.008527  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:22.035338  293728 cri.go:89] found id: ""
	I1206 10:09:22.035363  293728 logs.go:282] 0 containers: []
	W1206 10:09:22.035396  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:22.035407  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:22.035418  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:22.091435  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:22.091472  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:22.105532  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:22.105567  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:22.171773  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:22.163104    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.163868    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.165557    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.166181    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.167828    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:22.171793  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:22.171806  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:22.197667  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:22.197706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:24.735529  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:24.748375  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:24.748558  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:24.788906  293728 cri.go:89] found id: ""
	I1206 10:09:24.788978  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.789002  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:24.789024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:24.789113  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:24.818364  293728 cri.go:89] found id: ""
	I1206 10:09:24.818431  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.818453  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:24.818472  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:24.818564  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:24.845760  293728 cri.go:89] found id: ""
	I1206 10:09:24.845802  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.845811  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:24.845817  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:24.845889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:24.872973  293728 cri.go:89] found id: ""
	I1206 10:09:24.872997  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.873006  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:24.873012  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:24.873076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:24.902758  293728 cri.go:89] found id: ""
	I1206 10:09:24.902791  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.902801  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:24.902809  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:24.902885  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:24.929539  293728 cri.go:89] found id: ""
	I1206 10:09:24.929565  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.929575  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:24.929582  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:24.929665  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:24.955731  293728 cri.go:89] found id: ""
	I1206 10:09:24.955806  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.955822  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:24.955829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:24.955891  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:24.980673  293728 cri.go:89] found id: ""
	I1206 10:09:24.980704  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.980713  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:24.980722  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:24.980734  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:25.017868  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:25.017899  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:25.077472  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:25.077510  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:25.093107  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:25.093139  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:25.164597  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:25.155645    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.156390    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158149    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158952    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.160572    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:25.164635  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:25.164649  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:27.694118  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:27.704932  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:27.705013  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:27.734684  293728 cri.go:89] found id: ""
	I1206 10:09:27.734762  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.734784  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:27.734802  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:27.734892  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:27.771355  293728 cri.go:89] found id: ""
	I1206 10:09:27.771442  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.771466  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:27.771485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:27.771568  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:27.800742  293728 cri.go:89] found id: ""
	I1206 10:09:27.800818  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.800836  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:27.800844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:27.800907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:27.827029  293728 cri.go:89] found id: ""
	I1206 10:09:27.827058  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.827068  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:27.827075  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:27.827136  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:27.853299  293728 cri.go:89] found id: ""
	I1206 10:09:27.853323  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.853332  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:27.853339  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:27.853431  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:27.878371  293728 cri.go:89] found id: ""
	I1206 10:09:27.878394  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.878402  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:27.878415  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:27.878525  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:27.903247  293728 cri.go:89] found id: ""
	I1206 10:09:27.903269  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.903277  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:27.903283  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:27.903405  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:27.927665  293728 cri.go:89] found id: ""
	I1206 10:09:27.927687  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.927695  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:27.927703  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:27.927714  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:27.993787  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:27.984910    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.985739    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.987460    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.988125    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.989907    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:27.993808  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:27.993820  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:28.021097  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:28.021132  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:28.050410  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:28.050438  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:28.108602  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:28.108636  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:30.622836  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:30.633282  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:30.633354  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:30.657827  293728 cri.go:89] found id: ""
	I1206 10:09:30.657850  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.657859  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:30.657865  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:30.657929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:30.685495  293728 cri.go:89] found id: ""
	I1206 10:09:30.685525  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.685534  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:30.685541  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:30.685611  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:30.710543  293728 cri.go:89] found id: ""
	I1206 10:09:30.710576  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.710585  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:30.710591  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:30.710661  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:30.738572  293728 cri.go:89] found id: ""
	I1206 10:09:30.738667  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.738690  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:30.738710  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:30.738815  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:30.782603  293728 cri.go:89] found id: ""
	I1206 10:09:30.782684  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.782706  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:30.782725  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:30.782829  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:30.810264  293728 cri.go:89] found id: ""
	I1206 10:09:30.810342  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.810364  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:30.810387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:30.810479  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:30.835864  293728 cri.go:89] found id: ""
	I1206 10:09:30.835944  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.835960  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:30.835968  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:30.836050  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:30.860832  293728 cri.go:89] found id: ""
	I1206 10:09:30.860858  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.860867  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:30.860876  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:30.860887  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:30.917397  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:30.917433  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:30.931490  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:30.931572  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:31.004606  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:30.993339    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.994064    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.995768    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.996292    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.997903    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:31.004692  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:31.004725  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:31.033130  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:31.033168  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:33.563282  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:33.574558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:33.574631  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:33.600752  293728 cri.go:89] found id: ""
	I1206 10:09:33.600784  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.600797  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:33.600804  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:33.600876  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:33.626879  293728 cri.go:89] found id: ""
	I1206 10:09:33.626909  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.626919  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:33.626925  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:33.626987  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:33.652921  293728 cri.go:89] found id: ""
	I1206 10:09:33.652945  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.652954  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:33.652960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:33.653025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:33.678584  293728 cri.go:89] found id: ""
	I1206 10:09:33.678619  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.678627  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:33.678634  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:33.678704  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:33.706401  293728 cri.go:89] found id: ""
	I1206 10:09:33.706424  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.706433  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:33.706439  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:33.706514  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:33.754300  293728 cri.go:89] found id: ""
	I1206 10:09:33.754326  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.754334  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:33.754341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:33.754410  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:33.782351  293728 cri.go:89] found id: ""
	I1206 10:09:33.782388  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.782397  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:33.782410  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:33.782479  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:33.809362  293728 cri.go:89] found id: ""
	I1206 10:09:33.809399  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.809407  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:33.809417  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:33.809428  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:33.845485  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:33.845510  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:33.902066  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:33.902106  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:33.915843  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:33.915871  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:33.983566  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:33.974999    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.975872    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977595    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977932    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.979555    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:33.983589  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:33.983610  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:36.512857  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:36.524687  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:36.524752  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:36.559537  293728 cri.go:89] found id: ""
	I1206 10:09:36.559559  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.559568  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:36.559574  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:36.559641  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:36.584964  293728 cri.go:89] found id: ""
	I1206 10:09:36.585033  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.585049  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:36.585056  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:36.585124  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:36.610724  293728 cri.go:89] found id: ""
	I1206 10:09:36.610750  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.610759  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:36.610765  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:36.610824  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:36.641090  293728 cri.go:89] found id: ""
	I1206 10:09:36.641158  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.641185  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:36.641198  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:36.641287  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:36.665900  293728 cri.go:89] found id: ""
	I1206 10:09:36.665926  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.665935  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:36.665941  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:36.666004  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:36.693620  293728 cri.go:89] found id: ""
	I1206 10:09:36.693650  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.693659  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:36.693666  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:36.693731  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:36.734543  293728 cri.go:89] found id: ""
	I1206 10:09:36.734621  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.734646  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:36.734665  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:36.734757  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:36.776081  293728 cri.go:89] found id: ""
	I1206 10:09:36.776146  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.776168  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:36.776188  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:36.776226  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:36.792679  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:36.792711  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:36.861792  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:36.852688    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.853295    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855348    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855858    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.857501    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:36.861815  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:36.861828  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:36.887686  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:36.887722  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:36.915203  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:36.915229  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:39.473166  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:39.484986  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:39.485070  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:39.519022  293728 cri.go:89] found id: ""
	I1206 10:09:39.519084  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.519097  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:39.519105  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:39.519183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:39.550949  293728 cri.go:89] found id: ""
	I1206 10:09:39.550987  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.551002  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:39.551009  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:39.551083  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:39.576090  293728 cri.go:89] found id: ""
	I1206 10:09:39.576120  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.576129  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:39.576136  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:39.576199  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:39.602338  293728 cri.go:89] found id: ""
	I1206 10:09:39.602364  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.602374  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:39.602386  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:39.602447  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:39.627803  293728 cri.go:89] found id: ""
	I1206 10:09:39.627841  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.627850  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:39.627857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:39.627929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:39.653348  293728 cri.go:89] found id: ""
	I1206 10:09:39.653376  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.653385  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:39.653392  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:39.653454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:39.679324  293728 cri.go:89] found id: ""
	I1206 10:09:39.679418  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.679434  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:39.679442  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:39.679515  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:39.704684  293728 cri.go:89] found id: ""
	I1206 10:09:39.704708  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.704717  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:39.704726  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:39.704738  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:39.764873  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:39.764905  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:39.779533  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:39.779558  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:39.852807  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:39.844502    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.845176    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.846778    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.847166    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.848722    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:39.852829  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:39.852842  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:39.879753  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:39.879787  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:42.409609  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:42.421328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:42.421397  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:42.447308  293728 cri.go:89] found id: ""
	I1206 10:09:42.447333  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.447342  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:42.447349  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:42.447440  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:42.481946  293728 cri.go:89] found id: ""
	I1206 10:09:42.481977  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.481985  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:42.481992  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:42.482055  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:42.514307  293728 cri.go:89] found id: ""
	I1206 10:09:42.514378  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.514401  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:42.514420  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:42.514512  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:42.546780  293728 cri.go:89] found id: ""
	I1206 10:09:42.546806  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.546815  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:42.546822  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:42.546891  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:42.573407  293728 cri.go:89] found id: ""
	I1206 10:09:42.573430  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.573439  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:42.573445  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:42.573501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:42.599133  293728 cri.go:89] found id: ""
	I1206 10:09:42.599156  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.599164  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:42.599171  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:42.599233  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:42.625000  293728 cri.go:89] found id: ""
	I1206 10:09:42.625028  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.625037  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:42.625043  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:42.625107  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:42.654408  293728 cri.go:89] found id: ""
	I1206 10:09:42.654436  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.654446  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:42.654455  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:42.654467  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:42.711699  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:42.711733  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:42.727806  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:42.727881  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:42.811421  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:42.801418    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.803078    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.804330    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.805386    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.807056    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:42.801418    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.803078    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.804330    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.805386    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.807056    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:42.811446  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:42.811460  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:42.838410  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:42.838445  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
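	The block above is one complete diagnostic pass: for each expected control-plane component, minikube lists matching CRI containers with "sudo crictl ps -a --quiet --name=<component>", and every probe returns an empty ID list, which is what produces the paired found id: "" / No container was found matching "..." lines before the fallback log gathering. A minimal Go sketch of that probe loop (a hypothetical illustration, not minikube's actual cri.go; it assumes sudo and crictl are available on the host):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	components := []string{
	    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	    		"kube-proxy", "kube-controller-manager", "kindnet",
	    		"kubernetes-dashboard",
	    	}
	    	for _, name := range components {
	    		// Mirrors the probe in the log:
	    		//   sudo crictl ps -a --quiet --name=<component>
	    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
	    			"--quiet", "--name="+name).Output()
	    		if err != nil {
	    			fmt.Printf("%s: crictl failed: %v\n", name, err)
	    			continue
	    		}
	    		ids := strings.Fields(string(out))
	    		if len(ids) == 0 {
	    			// The case the log reports as:
	    			//   No container was found matching "<component>"
	    			fmt.Printf("no container found matching %q\n", name)
	    			continue
	    		}
	    		fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
	    	}
	    }

	Run on this node, such a sketch would print eight "no container found matching ..." lines, matching the state the wait loop keeps observing below.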
	I1206 10:09:45.369084  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:45.380279  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:45.380388  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:45.405587  293728 cri.go:89] found id: ""
	I1206 10:09:45.405612  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.405621  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:45.405628  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:45.405688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:45.433060  293728 cri.go:89] found id: ""
	I1206 10:09:45.433088  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.433097  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:45.433103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:45.433164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:45.460740  293728 cri.go:89] found id: ""
	I1206 10:09:45.460763  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.460772  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:45.460778  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:45.460837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:45.497706  293728 cri.go:89] found id: ""
	I1206 10:09:45.497771  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.497793  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:45.497813  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:45.497904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:45.534656  293728 cri.go:89] found id: ""
	I1206 10:09:45.534681  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.534690  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:45.534696  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:45.534770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:45.564269  293728 cri.go:89] found id: ""
	I1206 10:09:45.564350  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.564372  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:45.564387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:45.564474  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:45.588438  293728 cri.go:89] found id: ""
	I1206 10:09:45.588517  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.588539  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:45.588558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:45.588651  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:45.613920  293728 cri.go:89] found id: ""
	I1206 10:09:45.613951  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.613960  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:45.613970  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:45.613980  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:45.641788  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:45.641863  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:45.699089  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:45.699123  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:45.712662  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:45.712734  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:45.793739  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:45.785473    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.786020    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.787671    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.788175    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.789766    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:45.785473    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.786020    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.787671    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.788175    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.789766    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:45.793759  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:45.793773  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
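	Every "describe nodes" attempt fails for the same underlying reason: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and with no kube-apiserver container running (per the crictl probes) each TCP connect is refused, so client-go's discovery layer emits the five memcache.go "Unhandled Error" retries before kubectl gives up. A short, self-contained Go sketch of the reachability check implied by those errors (illustrative only; host and port are taken from the log):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// The kubeconfig in the log targets https://localhost:8443. With no
	    	// kube-apiserver container running, a plain TCP dial reproduces the
	    	// "connect: connection refused" seen in every stderr block above.
	    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	    	if err != nil {
	    		fmt.Println("apiserver unreachable:", err)
	    		return
	    	}
	    	conn.Close()
	    	fmt.Println("port 8443 is accepting connections")
	    }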
	I1206 10:09:48.320858  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:48.331937  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:48.332070  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:48.356716  293728 cri.go:89] found id: ""
	I1206 10:09:48.356784  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.356798  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:48.356806  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:48.356866  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:48.382138  293728 cri.go:89] found id: ""
	I1206 10:09:48.382172  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.382181  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:48.382188  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:48.382258  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:48.408214  293728 cri.go:89] found id: ""
	I1206 10:09:48.408238  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.408247  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:48.408253  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:48.408313  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:48.433328  293728 cri.go:89] found id: ""
	I1206 10:09:48.433351  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.433360  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:48.433366  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:48.433428  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:48.460263  293728 cri.go:89] found id: ""
	I1206 10:09:48.460284  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.460292  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:48.460298  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:48.460355  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:48.488344  293728 cri.go:89] found id: ""
	I1206 10:09:48.488373  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.488381  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:48.488388  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:48.488452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:48.521629  293728 cri.go:89] found id: ""
	I1206 10:09:48.521658  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.521666  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:48.521673  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:48.521759  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:48.549255  293728 cri.go:89] found id: ""
	I1206 10:09:48.549321  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.549344  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:48.549365  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:48.549392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:48.609413  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:48.609450  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:48.623661  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:48.623688  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:48.693637  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:48.684667    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.685431    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687132    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687585    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.689240    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:48.684667    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.685431    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687132    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687585    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.689240    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:48.693661  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:48.693674  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:48.719587  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:48.719660  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:51.258260  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:51.268785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:51.268856  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:51.295768  293728 cri.go:89] found id: ""
	I1206 10:09:51.295793  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.295801  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:51.295808  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:51.295879  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:51.321853  293728 cri.go:89] found id: ""
	I1206 10:09:51.321886  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.321894  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:51.321900  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:51.321968  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:51.347472  293728 cri.go:89] found id: ""
	I1206 10:09:51.347494  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.347502  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:51.347517  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:51.347575  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:51.371656  293728 cri.go:89] found id: ""
	I1206 10:09:51.371683  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.371692  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:51.371698  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:51.371758  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:51.397262  293728 cri.go:89] found id: ""
	I1206 10:09:51.397289  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.397298  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:51.397305  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:51.397409  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:51.423015  293728 cri.go:89] found id: ""
	I1206 10:09:51.423045  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.423061  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:51.423076  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:51.423149  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:51.454355  293728 cri.go:89] found id: ""
	I1206 10:09:51.454381  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.454390  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:51.454396  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:51.454463  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:51.486768  293728 cri.go:89] found id: ""
	I1206 10:09:51.486808  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.486823  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:51.486832  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:51.486843  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:51.554153  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:51.554192  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:51.568560  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:51.568590  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:51.634642  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:51.626552    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.627100    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.628640    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.629103    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.630610    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:51.626552    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.627100    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.628640    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.629103    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.630610    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:51.634664  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:51.634678  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:51.660429  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:51.660463  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
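	The fallback log gathering itself is four fixed probes, repeated in varying order from one iteration to the next: "journalctl -u kubelet -n 400" and "journalctl -u containerd -n 400" tail the last 400 journal entries for each unit; the dmesg invocation combines -P (no pager), -H (human-readable timestamps), -L=never (no color codes), and --level warn,err,crit,alert,emerg so only warnings and worse survive the tail; and the container-status command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, resolves crictl's full path when available and otherwise falls through to plain crictl, with docker ps -a as the last resort on Docker-runtime clusters.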
	I1206 10:09:54.188738  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:54.201905  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:54.201981  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:54.227986  293728 cri.go:89] found id: ""
	I1206 10:09:54.228012  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.228021  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:54.228028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:54.228113  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:54.254201  293728 cri.go:89] found id: ""
	I1206 10:09:54.254235  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.254245  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:54.254283  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:54.254395  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:54.278782  293728 cri.go:89] found id: ""
	I1206 10:09:54.278820  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.278830  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:54.278852  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:54.278935  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:54.303206  293728 cri.go:89] found id: ""
	I1206 10:09:54.303240  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.303249  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:54.303256  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:54.303323  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:54.328700  293728 cri.go:89] found id: ""
	I1206 10:09:54.328726  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.328735  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:54.328741  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:54.328818  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:54.352531  293728 cri.go:89] found id: ""
	I1206 10:09:54.352613  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.352638  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:54.352656  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:54.352746  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:54.381751  293728 cri.go:89] found id: ""
	I1206 10:09:54.381785  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.381795  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:54.381802  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:54.381873  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:54.410917  293728 cri.go:89] found id: ""
	I1206 10:09:54.410993  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.411015  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:54.411037  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:54.411076  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:54.440257  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:54.440285  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:54.500235  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:54.500278  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:54.515938  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:54.515966  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:54.588801  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:54.579599    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.580550    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582125    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582602    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.584281    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:54.579599    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.580550    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582125    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582602    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.584281    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:54.588823  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:54.588836  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:57.116312  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:57.127033  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:57.127111  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:57.152251  293728 cri.go:89] found id: ""
	I1206 10:09:57.152273  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.152282  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:57.152288  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:57.152346  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:57.176684  293728 cri.go:89] found id: ""
	I1206 10:09:57.176758  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.176773  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:57.176781  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:57.176840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:57.202374  293728 cri.go:89] found id: ""
	I1206 10:09:57.202436  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.202470  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:57.202494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:57.202580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:57.227547  293728 cri.go:89] found id: ""
	I1206 10:09:57.227573  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.227582  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:57.227589  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:57.227650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:57.253673  293728 cri.go:89] found id: ""
	I1206 10:09:57.253705  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.253714  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:57.253721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:57.253789  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:57.278618  293728 cri.go:89] found id: ""
	I1206 10:09:57.278644  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.278654  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:57.278660  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:57.278722  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:57.304336  293728 cri.go:89] found id: ""
	I1206 10:09:57.304384  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.304397  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:57.304423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:57.304508  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:57.334469  293728 cri.go:89] found id: ""
	I1206 10:09:57.334492  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.334500  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:57.334508  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:57.334520  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:57.348891  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:57.348922  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:57.415906  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:57.407558    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.408081    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.409719    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.410287    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.411964    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:57.407558    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.408081    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.409719    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.410287    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.411964    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:57.415927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:57.415939  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:57.441880  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:57.441918  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:57.475269  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:57.475297  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:00.036981  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:00.091003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:00.091183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:00.199598  293728 cri.go:89] found id: ""
	I1206 10:10:00.199642  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.199652  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:00.199660  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:00.199761  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:00.291513  293728 cri.go:89] found id: ""
	I1206 10:10:00.291550  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.291562  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:00.291569  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:00.291653  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:00.363428  293728 cri.go:89] found id: ""
	I1206 10:10:00.363514  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.363541  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:00.363559  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:00.363706  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:00.471969  293728 cri.go:89] found id: ""
	I1206 10:10:00.471994  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.472004  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:00.472013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:00.472080  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:00.548937  293728 cri.go:89] found id: ""
	I1206 10:10:00.548960  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.548969  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:00.548976  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:00.549039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:00.612750  293728 cri.go:89] found id: ""
	I1206 10:10:00.612774  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.612783  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:00.612790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:00.612857  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:00.648024  293728 cri.go:89] found id: ""
	I1206 10:10:00.648051  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.648061  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:00.648068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:00.648145  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:00.678506  293728 cri.go:89] found id: ""
	I1206 10:10:00.678587  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.678615  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:00.678636  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:00.678671  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:00.755139  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:00.755237  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:00.771588  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:00.771629  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:00.849622  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:00.840203    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.840934    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.842739    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.843443    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.845027    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:00.840203    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.840934    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.842739    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.843443    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.845027    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:00.849656  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:00.849669  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:00.876546  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:00.876583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
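	One detail in the 10:10:00 iteration above: the crictl probes that elsewhere complete in roughly 25 ms spread out to 70-110 ms apiece (10:10:00.091 → .199 → .291 → .363 → .471 ...), which would be consistent with transient load on the node while logs are being collected, though the log alone cannot confirm the cause.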
	I1206 10:10:03.409148  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:03.420472  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:03.420547  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:03.449464  293728 cri.go:89] found id: ""
	I1206 10:10:03.449487  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.449496  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:03.449521  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:03.449598  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:03.482241  293728 cri.go:89] found id: ""
	I1206 10:10:03.482267  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.482276  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:03.482286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:03.482349  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:03.512048  293728 cri.go:89] found id: ""
	I1206 10:10:03.512075  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.512084  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:03.512090  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:03.512153  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:03.544039  293728 cri.go:89] found id: ""
	I1206 10:10:03.544064  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.544073  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:03.544080  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:03.544159  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:03.568866  293728 cri.go:89] found id: ""
	I1206 10:10:03.568942  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.568966  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:03.568978  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:03.569071  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:03.595896  293728 cri.go:89] found id: ""
	I1206 10:10:03.595930  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.595940  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:03.595946  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:03.596020  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:03.620834  293728 cri.go:89] found id: ""
	I1206 10:10:03.620863  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.620871  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:03.620878  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:03.620950  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:03.644327  293728 cri.go:89] found id: ""
	I1206 10:10:03.644359  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.644368  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:03.644377  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:03.644392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:03.707856  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:03.699517    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.700161    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.701732    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.702251    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.703903    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:03.699517    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.700161    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.701732    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.702251    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.703903    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:03.707879  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:03.707891  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:03.735529  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:03.735562  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:03.767489  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:03.767516  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:03.831889  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:03.831926  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
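	Each iteration opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" (-x exact match, -n newest process, -f match against the full command line) to check whether an apiserver process has appeared since the last pass. It never does, so the roughly three-second probe-and-gather cycle running since 10:09:42 simply repeats, and will keep repeating until minikube's wait gives up.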
	I1206 10:10:06.346582  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:06.357845  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:06.357929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:06.387151  293728 cri.go:89] found id: ""
	I1206 10:10:06.387176  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.387185  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:06.387192  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:06.387256  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:06.413165  293728 cri.go:89] found id: ""
	I1206 10:10:06.413194  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.413203  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:06.413210  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:06.413271  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:06.437677  293728 cri.go:89] found id: ""
	I1206 10:10:06.437701  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.437710  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:06.437716  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:06.437772  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:06.463040  293728 cri.go:89] found id: ""
	I1206 10:10:06.463070  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.463080  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:06.463087  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:06.463150  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:06.494675  293728 cri.go:89] found id: ""
	I1206 10:10:06.494751  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.494774  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:06.494794  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:06.494889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:06.526246  293728 cri.go:89] found id: ""
	I1206 10:10:06.526316  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.526337  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:06.526357  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:06.526440  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:06.559804  293728 cri.go:89] found id: ""
	I1206 10:10:06.559829  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.559839  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:06.559845  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:06.559907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:06.589855  293728 cri.go:89] found id: ""
	I1206 10:10:06.589930  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.589964  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:06.590003  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:06.590032  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:06.616596  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:06.616632  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:06.646994  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:06.647021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:06.702957  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:06.702993  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:06.716751  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:06.716778  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:06.798071  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:06.789752    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.790292    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.791805    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.792344    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.793982    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:06.789752    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.790292    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.791805    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.792344    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.793982    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:09.298347  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:09.308960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:09.309035  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:09.333650  293728 cri.go:89] found id: ""
	I1206 10:10:09.333675  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.333683  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:09.333690  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:09.333767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:09.357861  293728 cri.go:89] found id: ""
	I1206 10:10:09.357885  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.357894  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:09.357900  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:09.358010  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:09.382744  293728 cri.go:89] found id: ""
	I1206 10:10:09.382770  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.382779  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:09.382785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:09.382878  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:09.413180  293728 cri.go:89] found id: ""
	I1206 10:10:09.413259  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.413282  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:09.413295  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:09.413376  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:09.438201  293728 cri.go:89] found id: ""
	I1206 10:10:09.438227  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.438235  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:09.438242  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:09.438300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:09.462981  293728 cri.go:89] found id: ""
	I1206 10:10:09.463058  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.463084  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:09.463103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:09.463199  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:09.489818  293728 cri.go:89] found id: ""
	I1206 10:10:09.489840  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.489849  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:09.489855  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:09.489914  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:09.517662  293728 cri.go:89] found id: ""
	I1206 10:10:09.517689  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.517698  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:09.517707  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:09.517719  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:09.576466  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:09.576502  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:09.590374  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:09.590401  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:09.655862  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:09.646406    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.646998    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.648878    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.649656    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.651513    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:09.646406    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.646998    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.648878    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.649656    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.651513    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:09.655883  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:09.655895  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:09.681441  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:09.681477  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
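	Every describe-nodes attempt fails the same way: the kubeconfig targets https://localhost:8443 and the dial is refused, so nothing is listening on the API server port. A quick way to confirm that symptom by hand, using the binary and kubeconfig paths already visible in the log (the /healthz path is the standard apiserver health endpoint, added here for illustration):
	
	    # inside the node
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz \
	      || curl -ksS https://localhost:8443/healthz
	
	If nothing is bound to 8443, both commands fail with the same "connection refused" shown in the stderr blocks above.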
	I1206 10:10:12.211127  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:12.222215  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:12.222285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:12.247472  293728 cri.go:89] found id: ""
	I1206 10:10:12.247547  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.247562  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:12.247573  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:12.247633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:12.272505  293728 cri.go:89] found id: ""
	I1206 10:10:12.272533  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.272543  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:12.272550  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:12.272638  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:12.297673  293728 cri.go:89] found id: ""
	I1206 10:10:12.297698  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.297707  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:12.297715  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:12.297830  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:12.322568  293728 cri.go:89] found id: ""
	I1206 10:10:12.322609  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.322618  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:12.322625  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:12.322701  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:12.349304  293728 cri.go:89] found id: ""
	I1206 10:10:12.349331  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.349341  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:12.349347  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:12.349443  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:12.375736  293728 cri.go:89] found id: ""
	I1206 10:10:12.375762  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.375771  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:12.375778  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:12.375840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:12.400942  293728 cri.go:89] found id: ""
	I1206 10:10:12.400966  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.400974  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:12.400981  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:12.401040  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:12.426874  293728 cri.go:89] found id: ""
	I1206 10:10:12.426916  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.426926  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:12.426936  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:12.426948  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:12.484510  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:12.484587  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:12.499107  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:12.499186  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:12.572427  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:12.563920    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.564850    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566425    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566780    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.568265    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:12.563920    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.564850    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566425    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566780    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.568265    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:12.572450  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:12.572466  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:12.598814  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:12.598849  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:15.128638  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:15.139805  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:15.139876  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:15.165109  293728 cri.go:89] found id: ""
	I1206 10:10:15.165133  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.165149  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:15.165156  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:15.165219  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:15.196948  293728 cri.go:89] found id: ""
	I1206 10:10:15.196974  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.196982  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:15.196989  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:15.197059  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:15.222058  293728 cri.go:89] found id: ""
	I1206 10:10:15.222082  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.222090  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:15.222096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:15.222155  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:15.248215  293728 cri.go:89] found id: ""
	I1206 10:10:15.248238  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.248247  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:15.248254  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:15.248312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:15.273082  293728 cri.go:89] found id: ""
	I1206 10:10:15.273104  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.273113  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:15.273120  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:15.273179  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:15.298006  293728 cri.go:89] found id: ""
	I1206 10:10:15.298029  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.298037  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:15.298043  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:15.298101  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:15.322519  293728 cri.go:89] found id: ""
	I1206 10:10:15.322542  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.322550  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:15.322557  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:15.322615  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:15.347746  293728 cri.go:89] found id: ""
	I1206 10:10:15.347770  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.347778  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:15.347786  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:15.347797  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:15.361534  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:15.361561  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:15.427348  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:15.418245    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.419137    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421066    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421690    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.423366    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:15.418245    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.419137    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421066    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421690    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.423366    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:15.427370  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:15.427404  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:15.453826  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:15.453864  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:15.487015  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:15.487049  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
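	The four "Gathering logs for ..." probes rotate order between cycles but are otherwise fixed. Collected into one script, verbatim from the Run: lines above, they make a self-contained snapshot of the node state:
	
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u containerd -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	
	Only the last command depends on the API server, which is why it is the only probe that fails in this excerpt.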
	I1206 10:10:18.053317  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:18.064493  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:18.064566  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:18.089748  293728 cri.go:89] found id: ""
	I1206 10:10:18.089773  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.089782  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:18.089789  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:18.089850  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:18.116011  293728 cri.go:89] found id: ""
	I1206 10:10:18.116039  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.116048  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:18.116055  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:18.116116  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:18.146676  293728 cri.go:89] found id: ""
	I1206 10:10:18.146701  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.146710  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:18.146716  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:18.146783  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:18.172596  293728 cri.go:89] found id: ""
	I1206 10:10:18.172621  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.172631  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:18.172643  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:18.172703  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:18.198506  293728 cri.go:89] found id: ""
	I1206 10:10:18.198584  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.198608  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:18.198630  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:18.198747  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:18.230708  293728 cri.go:89] found id: ""
	I1206 10:10:18.230786  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.230812  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:18.230830  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:18.230955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:18.257169  293728 cri.go:89] found id: ""
	I1206 10:10:18.257235  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.257250  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:18.257257  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:18.257317  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:18.285950  293728 cri.go:89] found id: ""
	I1206 10:10:18.285976  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.285985  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:18.285994  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:18.286006  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:18.318446  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:18.318471  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:18.379191  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:18.379227  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:18.393268  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:18.393295  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:18.458997  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:18.449882    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.450796    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452543    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452857    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.454349    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:18.449882    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.450796    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452543    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452857    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.454349    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:18.459023  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:18.459035  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:20.987221  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:20.999561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:20.999633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:21.037750  293728 cri.go:89] found id: ""
	I1206 10:10:21.037771  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.037780  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:21.037786  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:21.037846  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:21.063327  293728 cri.go:89] found id: ""
	I1206 10:10:21.063350  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.063358  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:21.063364  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:21.063448  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:21.088200  293728 cri.go:89] found id: ""
	I1206 10:10:21.088223  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.088231  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:21.088237  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:21.088298  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:21.118025  293728 cri.go:89] found id: ""
	I1206 10:10:21.118051  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.118061  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:21.118068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:21.118126  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:21.143740  293728 cri.go:89] found id: ""
	I1206 10:10:21.143770  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.143779  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:21.143785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:21.143848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:21.169323  293728 cri.go:89] found id: ""
	I1206 10:10:21.169401  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.169417  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:21.169424  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:21.169501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:21.194291  293728 cri.go:89] found id: ""
	I1206 10:10:21.194356  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.194380  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:21.194398  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:21.194490  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:21.219471  293728 cri.go:89] found id: ""
	I1206 10:10:21.219599  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.219653  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:21.219679  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:21.219706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:21.277216  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:21.277252  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:21.291736  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:21.291766  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:21.366215  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:21.357353    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.358173    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.359989    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.360738    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.362264    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:21.357353    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.358173    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.359989    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.360738    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.362264    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:21.366236  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:21.366250  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:21.392405  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:21.392437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:23.923653  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:23.934595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:23.934670  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:23.961107  293728 cri.go:89] found id: ""
	I1206 10:10:23.961130  293728 logs.go:282] 0 containers: []
	W1206 10:10:23.961138  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:23.961145  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:23.961209  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:23.994692  293728 cri.go:89] found id: ""
	I1206 10:10:23.994729  293728 logs.go:282] 0 containers: []
	W1206 10:10:23.994739  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:23.994745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:23.994817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:24.028605  293728 cri.go:89] found id: ""
	I1206 10:10:24.028689  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.028715  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:24.028735  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:24.028848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:24.057290  293728 cri.go:89] found id: ""
	I1206 10:10:24.057317  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.057326  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:24.057333  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:24.057400  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:24.085994  293728 cri.go:89] found id: ""
	I1206 10:10:24.086029  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.086039  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:24.086045  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:24.086128  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:24.112798  293728 cri.go:89] found id: ""
	I1206 10:10:24.112826  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.112835  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:24.112841  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:24.112930  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:24.139149  293728 cri.go:89] found id: ""
	I1206 10:10:24.139175  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.139184  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:24.139190  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:24.139300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:24.165213  293728 cri.go:89] found id: ""
	I1206 10:10:24.165239  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.165248  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:24.165257  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:24.165268  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:24.223441  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:24.223477  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:24.237256  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:24.237282  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:24.303131  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:24.295355    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.295806    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297324    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297646    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.299178    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:24.295355    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.295806    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297324    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297646    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.299178    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:24.303154  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:24.303170  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:24.329120  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:24.329160  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:26.857977  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:26.868844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:26.868920  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:26.893530  293728 cri.go:89] found id: ""
	I1206 10:10:26.893555  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.893563  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:26.893569  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:26.893628  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:26.922692  293728 cri.go:89] found id: ""
	I1206 10:10:26.922718  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.922727  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:26.922733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:26.922794  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:26.948535  293728 cri.go:89] found id: ""
	I1206 10:10:26.948560  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.948569  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:26.948575  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:26.948640  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:26.976097  293728 cri.go:89] found id: ""
	I1206 10:10:26.976167  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.976193  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:26.976212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:26.976300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:27.010083  293728 cri.go:89] found id: ""
	I1206 10:10:27.010161  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.010184  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:27.010229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:27.010333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:27.038839  293728 cri.go:89] found id: ""
	I1206 10:10:27.038913  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.038934  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:27.038954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:27.039084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:27.066982  293728 cri.go:89] found id: ""
	I1206 10:10:27.067063  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.067086  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:27.067105  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:27.067216  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:27.092863  293728 cri.go:89] found id: ""
	I1206 10:10:27.092891  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.092899  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:27.092909  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:27.092950  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:27.120341  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:27.120375  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:27.177452  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:27.177489  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:27.191505  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:27.191533  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:27.260108  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:27.251592    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.252285    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.253999    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.254325    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.255968    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
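kubectl's "connection refused" on localhost:8443 means nothing is bound to the apiserver port at all, as a direct dial would confirm. A short sketch of such a probe (illustrative helper, not part of minikube):

	package diag

	import (
		"net"
		"time"
	)

	// probe8443 reproduces the failure behind the kubectl errors above: dial
	// the apiserver port directly. On this node the dial would fail with
	// "connection refused", i.e. nothing is listening on 8443 yet.
	func probe8443(host string) error {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8443"), 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}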
	I1206 10:10:27.260129  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:27.260141  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
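The timestamp spacing between cycles (10:10:27 → 10:10:29 → 10:10:32 ...) shows the whole probe repeating on a roughly 2.5-3 s cadence while waiting for kube-apiserver to appear. A sketch of that poll loop, under the same runSSH assumption as above:

	package diag

	import (
		"fmt"
		"time"
	)

	// waitForAPIServer repeats the probe the log shows between cycles
	// (`sudo pgrep -xnf kube-apiserver.*minikube.*`) until it succeeds or
	// the deadline passes; the 2.5s sleep matches the timestamp spacing
	// above. runSSH is again a hypothetical stand-in for ssh_runner.
	func waitForAPIServer(runSSH func(string) (string, error), timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				return nil // apiserver process found
			}
			time.Sleep(2500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}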
	I1206 10:10:29.785293  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:29.795873  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:29.795947  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:29.826896  293728 cri.go:89] found id: ""
	I1206 10:10:29.826934  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.826944  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:29.826950  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:29.827093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:29.857768  293728 cri.go:89] found id: ""
	I1206 10:10:29.857793  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.857803  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:29.857809  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:29.857881  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:29.885651  293728 cri.go:89] found id: ""
	I1206 10:10:29.885686  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.885696  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:29.885721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:29.885805  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:29.910764  293728 cri.go:89] found id: ""
	I1206 10:10:29.910892  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.910916  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:29.910928  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:29.911014  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:29.937166  293728 cri.go:89] found id: ""
	I1206 10:10:29.937191  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.937201  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:29.937208  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:29.937270  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:29.962684  293728 cri.go:89] found id: ""
	I1206 10:10:29.962717  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.962726  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:29.962733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:29.962799  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:29.993702  293728 cri.go:89] found id: ""
	I1206 10:10:29.993776  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.993799  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:29.993818  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:29.993904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:30.061338  293728 cri.go:89] found id: ""
	I1206 10:10:30.061423  293728 logs.go:282] 0 containers: []
	W1206 10:10:30.061447  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:30.061482  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:30.061514  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:30.110307  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:30.110344  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:30.178825  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:30.178864  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:30.194614  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:30.194641  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:30.269484  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:30.258437    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.259022    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.261951    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263145    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263843    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:30.269507  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:30.269521  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:32.796483  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:32.807219  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:32.807347  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:32.832338  293728 cri.go:89] found id: ""
	I1206 10:10:32.832365  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.832374  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:32.832381  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:32.832443  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:32.857737  293728 cri.go:89] found id: ""
	I1206 10:10:32.857763  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.857771  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:32.857780  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:32.857840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:32.886514  293728 cri.go:89] found id: ""
	I1206 10:10:32.886537  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.886546  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:32.886553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:32.886622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:32.916133  293728 cri.go:89] found id: ""
	I1206 10:10:32.916157  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.916166  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:32.916172  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:32.916278  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:32.940460  293728 cri.go:89] found id: ""
	I1206 10:10:32.940485  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.940493  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:32.940500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:32.940580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:32.967101  293728 cri.go:89] found id: ""
	I1206 10:10:32.967129  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.967139  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:32.967146  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:32.967255  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:33.003657  293728 cri.go:89] found id: ""
	I1206 10:10:33.003687  293728 logs.go:282] 0 containers: []
	W1206 10:10:33.003696  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:33.003703  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:33.003817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:33.034541  293728 cri.go:89] found id: ""
	I1206 10:10:33.034570  293728 logs.go:282] 0 containers: []
	W1206 10:10:33.034579  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:33.034587  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:33.034599  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:33.103182  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:33.094513    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.095149    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.096956    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.097426    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.099078    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:33.103205  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:33.103219  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:33.129473  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:33.129508  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:33.158555  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:33.158583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:33.216375  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:33.216409  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:35.730137  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:35.743050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:35.743211  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:35.782795  293728 cri.go:89] found id: ""
	I1206 10:10:35.782873  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.782897  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:35.782917  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:35.783049  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:35.810026  293728 cri.go:89] found id: ""
	I1206 10:10:35.810102  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.810126  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:35.810144  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:35.810234  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:35.835162  293728 cri.go:89] found id: ""
	I1206 10:10:35.835240  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.835265  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:35.835286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:35.835412  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:35.860195  293728 cri.go:89] found id: ""
	I1206 10:10:35.860227  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.860236  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:35.860247  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:35.860386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:35.886939  293728 cri.go:89] found id: ""
	I1206 10:10:35.886977  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.886995  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:35.887003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:35.887093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:35.917822  293728 cri.go:89] found id: ""
	I1206 10:10:35.917848  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.917858  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:35.917864  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:35.917944  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:35.945452  293728 cri.go:89] found id: ""
	I1206 10:10:35.945478  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.945488  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:35.945494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:35.945556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:35.986146  293728 cri.go:89] found id: ""
	I1206 10:10:35.986174  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.986183  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:35.986193  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:35.986204  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:36.053722  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:36.053759  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:36.068786  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:36.068815  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:36.132981  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:36.124259    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.124911    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.126650    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.127348    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.128990    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:36.133005  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:36.133018  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:36.158971  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:36.159009  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:38.688989  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:38.699954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:38.700025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:38.732646  293728 cri.go:89] found id: ""
	I1206 10:10:38.732680  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.732689  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:38.732696  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:38.732757  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:38.760849  293728 cri.go:89] found id: ""
	I1206 10:10:38.760878  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.760888  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:38.760894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:38.760952  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:38.793233  293728 cri.go:89] found id: ""
	I1206 10:10:38.793258  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.793267  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:38.793274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:38.793355  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:38.818786  293728 cri.go:89] found id: ""
	I1206 10:10:38.818814  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.818823  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:38.818831  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:38.818925  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:38.845346  293728 cri.go:89] found id: ""
	I1206 10:10:38.845373  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.845382  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:38.845388  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:38.845449  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:38.876064  293728 cri.go:89] found id: ""
	I1206 10:10:38.876088  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.876097  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:38.876103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:38.876193  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:38.901010  293728 cri.go:89] found id: ""
	I1206 10:10:38.901037  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.901046  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:38.901053  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:38.901121  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:38.931159  293728 cri.go:89] found id: ""
	I1206 10:10:38.931185  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.931194  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:38.931203  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:38.931214  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:38.945219  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:38.945247  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:39.040279  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:39.031608    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.032449    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034282    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034607    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.036094    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:39.040303  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:39.040315  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:39.069669  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:39.069709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:39.102102  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:39.102133  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:41.662114  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:41.674379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:41.674461  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:41.700812  293728 cri.go:89] found id: ""
	I1206 10:10:41.700836  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.700846  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:41.700852  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:41.700945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:41.732717  293728 cri.go:89] found id: ""
	I1206 10:10:41.732744  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.732753  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:41.732759  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:41.732818  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:41.765582  293728 cri.go:89] found id: ""
	I1206 10:10:41.765609  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.765618  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:41.765624  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:41.765684  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:41.795133  293728 cri.go:89] found id: ""
	I1206 10:10:41.795160  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.795169  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:41.795178  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:41.795240  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:41.824848  293728 cri.go:89] found id: ""
	I1206 10:10:41.824876  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.824885  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:41.824894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:41.825002  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:41.850710  293728 cri.go:89] found id: ""
	I1206 10:10:41.850738  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.850748  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:41.850754  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:41.850817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:41.876689  293728 cri.go:89] found id: ""
	I1206 10:10:41.876714  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.876723  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:41.876730  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:41.876837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:41.910933  293728 cri.go:89] found id: ""
	I1206 10:10:41.910958  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.910967  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:41.910977  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:41.910988  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:41.940383  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:41.940411  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:42.002369  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:42.002465  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:42.036193  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:42.036220  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:42.116431  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:42.104500    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.106090    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.107160    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.108051    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.110987    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:42.116466  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:42.116485  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:44.645750  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:44.657010  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:44.657087  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:44.681487  293728 cri.go:89] found id: ""
	I1206 10:10:44.681511  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.681520  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:44.681526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:44.681632  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:44.707007  293728 cri.go:89] found id: ""
	I1206 10:10:44.707032  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.707059  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:44.707065  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:44.707124  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:44.740358  293728 cri.go:89] found id: ""
	I1206 10:10:44.740384  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.740394  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:44.740400  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:44.740462  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:44.774979  293728 cri.go:89] found id: ""
	I1206 10:10:44.775005  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.775013  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:44.775020  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:44.775099  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:44.802733  293728 cri.go:89] found id: ""
	I1206 10:10:44.802759  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.802768  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:44.802774  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:44.802836  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:44.830059  293728 cri.go:89] found id: ""
	I1206 10:10:44.830082  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.830091  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:44.830104  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:44.830164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:44.857962  293728 cri.go:89] found id: ""
	I1206 10:10:44.857988  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.857997  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:44.858003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:44.858062  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:44.882971  293728 cri.go:89] found id: ""
	I1206 10:10:44.882993  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.883002  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:44.883011  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:44.883021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:44.939214  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:44.939249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:44.953046  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:44.953074  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:45.078537  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:45.068034    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069216    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069914    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.072098    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.073533    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:45.078570  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:45.078586  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:45.108352  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:45.108392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:47.660188  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:47.670914  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:47.670992  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:47.695337  293728 cri.go:89] found id: ""
	I1206 10:10:47.695363  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.695417  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:47.695425  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:47.695496  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:47.728763  293728 cri.go:89] found id: ""
	I1206 10:10:47.728834  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.728855  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:47.728877  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:47.728982  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:47.755564  293728 cri.go:89] found id: ""
	I1206 10:10:47.755640  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.755663  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:47.755683  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:47.755794  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:47.786763  293728 cri.go:89] found id: ""
	I1206 10:10:47.786838  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.786869  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:47.786892  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:47.786999  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:47.813109  293728 cri.go:89] found id: ""
	I1206 10:10:47.813187  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.813209  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:47.813227  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:47.813312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:47.839872  293728 cri.go:89] found id: ""
	I1206 10:10:47.839947  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.839963  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:47.839971  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:47.840029  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:47.864803  293728 cri.go:89] found id: ""
	I1206 10:10:47.864827  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.864835  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:47.864842  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:47.864908  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:47.893715  293728 cri.go:89] found id: ""
	I1206 10:10:47.893740  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.893749  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:47.893759  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:47.893770  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:47.962240  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:47.954010    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.954579    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956159    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956626    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.958129    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:47.962263  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:47.962275  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:47.988774  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:47.988808  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:48.022271  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:48.022301  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:48.088564  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:48.088601  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
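Taken together, every cycle above reports the same two facts: zero CRI containers for each control-plane component, and a closed apiserver port. A purely illustrative classifier for that pattern, not minikube logic:

	package diag

	// diagnose condenses what every cycle in this log reports: zero
	// container IDs per control-plane component plus a refused apiserver
	// port, which points at the control plane never starting rather than
	// at any single component crashing.
	func diagnose(found map[string][]string, apiReachable bool) string {
		total := 0
		for _, ids := range found {
			total += len(ids)
		}
		switch {
		case total == 0 && !apiReachable:
			return "control plane never started: no CRI containers, apiserver port closed"
		case !apiReachable:
			return "containers present but apiserver unreachable"
		default:
			return "apiserver reachable"
		}
	}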
	I1206 10:10:50.605005  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:50.615765  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:50.615847  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:50.641365  293728 cri.go:89] found id: ""
	I1206 10:10:50.641389  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.641397  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:50.641404  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:50.641468  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:50.665749  293728 cri.go:89] found id: ""
	I1206 10:10:50.665775  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.665784  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:50.665790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:50.665848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:50.693092  293728 cri.go:89] found id: ""
	I1206 10:10:50.693117  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.693133  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:50.693139  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:50.693198  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:50.721292  293728 cri.go:89] found id: ""
	I1206 10:10:50.721319  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.721328  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:50.721335  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:50.721394  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:50.757580  293728 cri.go:89] found id: ""
	I1206 10:10:50.757608  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.757617  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:50.757623  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:50.757681  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:50.795246  293728 cri.go:89] found id: ""
	I1206 10:10:50.795275  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.795284  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:50.795290  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:50.795352  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:50.831466  293728 cri.go:89] found id: ""
	I1206 10:10:50.831489  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.831497  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:50.831503  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:50.831563  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:50.856692  293728 cri.go:89] found id: ""
	I1206 10:10:50.856719  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.856728  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
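The scan above runs one crictl query per control-plane component; --quiet makes crictl print bare container IDs, so empty output is what produces the found id: "" / 0 containers lines. A self-contained sketch of the same scan shape (listContainerIDs is an assumed helper name, not minikube's cri.go API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out the same way the log shows:
//   sudo crictl ps -a --quiet --name=<name>
// --quiet prints one container ID per line; empty output means no match.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}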
	I1206 10:10:50.856737  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:50.856748  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:50.914369  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:50.914404  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:50.928218  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:50.928249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:51.001552  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:50.990416    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.991460    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.992543    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.993284    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.996113    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:51.001649  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:51.001679  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:51.035670  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:51.035706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
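The "container status" gather is a shell fallback chain: use crictl if it resolves on PATH, otherwise fall back to docker ps -a. A sketch of invoking that same one-liner from Go, illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The logged one-liner: `which crictl || echo crictl` keeps the
	// command well-formed even when which finds nothing, and the
	// trailing `|| sudo docker ps -a` catches a crictl that is absent
	// or fails.
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}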
	I1206 10:10:53.568268  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:53.579523  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:53.579600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:53.605604  293728 cri.go:89] found id: ""
	I1206 10:10:53.605626  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.605636  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:53.605642  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:53.605704  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:53.632535  293728 cri.go:89] found id: ""
	I1206 10:10:53.632558  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.632566  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:53.632573  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:53.632633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:53.664459  293728 cri.go:89] found id: ""
	I1206 10:10:53.664485  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.664494  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:53.664500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:53.664561  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:53.689200  293728 cri.go:89] found id: ""
	I1206 10:10:53.689227  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.689235  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:53.689242  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:53.689303  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:53.724364  293728 cri.go:89] found id: ""
	I1206 10:10:53.724391  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.724401  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:53.724408  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:53.724489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:53.760957  293728 cri.go:89] found id: ""
	I1206 10:10:53.760985  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.760995  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:53.761002  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:53.761065  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:53.795256  293728 cri.go:89] found id: ""
	I1206 10:10:53.795417  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.795469  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:53.795490  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:53.795618  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:53.820946  293728 cri.go:89] found id: ""
	I1206 10:10:53.821014  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.821028  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:53.821038  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:53.821049  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:53.850603  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:53.850632  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:53.910568  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:53.910606  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:53.924408  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:53.924435  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:53.993865  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:53.984800    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.985669    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987623    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987938    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.989469    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:53.993926  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:53.993964  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
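Every "describe nodes" gather in this run fails the same way: the node-local kubectl (the versioned binary under /var/lib/minikube/binaries) cannot reach https://localhost:8443 because no apiserver is listening yet, so kubectl exits 1 with "connection refused". A sketch that runs the same command and treats that specific failure as expected (the error classification is an assumption, not minikube's logic):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil && strings.Contains(string(out), "connection refused") {
		// Expected while the control plane is down: nothing is
		// listening on localhost:8443, so skip this gather.
		fmt.Println("apiserver not reachable yet; skipping describe nodes")
		return
	}
	fmt.Print(string(out))
}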
	I1206 10:10:56.525953  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:56.537170  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:56.537251  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:56.562800  293728 cri.go:89] found id: ""
	I1206 10:10:56.562825  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.562834  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:56.562841  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:56.562903  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:56.589000  293728 cri.go:89] found id: ""
	I1206 10:10:56.589032  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.589042  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:56.589048  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:56.589108  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:56.613252  293728 cri.go:89] found id: ""
	I1206 10:10:56.613276  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.613284  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:56.613291  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:56.613354  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:56.643136  293728 cri.go:89] found id: ""
	I1206 10:10:56.643176  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.643186  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:56.643193  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:56.643265  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:56.669515  293728 cri.go:89] found id: ""
	I1206 10:10:56.669539  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.669547  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:56.669554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:56.669613  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:56.694989  293728 cri.go:89] found id: ""
	I1206 10:10:56.695013  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.695022  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:56.695028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:56.695295  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:56.733872  293728 cri.go:89] found id: ""
	I1206 10:10:56.733898  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.733907  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:56.733914  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:56.733981  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:56.768700  293728 cri.go:89] found id: ""
	I1206 10:10:56.768725  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.768734  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:56.768745  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:56.768765  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:56.801786  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:56.801812  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:56.857425  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:56.857458  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:56.870898  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:56.870929  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:56.939737  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:56.930826    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.931761    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933321    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933912    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.935699    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:56.939814  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:56.939833  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:59.467303  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:59.479788  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:59.479913  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:59.507178  293728 cri.go:89] found id: ""
	I1206 10:10:59.507214  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.507223  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:59.507229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:59.507307  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:59.532362  293728 cri.go:89] found id: ""
	I1206 10:10:59.532435  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.532460  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:59.532478  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:59.532565  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:59.561793  293728 cri.go:89] found id: ""
	I1206 10:10:59.561869  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.561893  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:59.561912  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:59.562006  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:59.587885  293728 cri.go:89] found id: ""
	I1206 10:10:59.587914  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.587933  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:59.587955  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:59.588043  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:59.616632  293728 cri.go:89] found id: ""
	I1206 10:10:59.616701  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.616723  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:59.616741  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:59.616828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:59.641907  293728 cri.go:89] found id: ""
	I1206 10:10:59.641942  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.641950  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:59.641957  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:59.642030  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:59.666146  293728 cri.go:89] found id: ""
	I1206 10:10:59.666181  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.666190  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:59.666197  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:59.666267  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:59.690454  293728 cri.go:89] found id: ""
	I1206 10:10:59.690525  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.690549  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:59.690571  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:59.690606  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:59.747565  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:59.747602  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:59.761979  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:59.762033  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:59.832718  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:59.824094    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825243    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825921    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827020    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827705    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:59.832743  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:59.832755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:59.858330  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:59.858360  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
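The kubelet and containerd gathers cap output at the last 400 journal lines per systemd unit, and the dmesg gather filters to warning-level and worse before tailing. A compact sketch of those gathers, with gatherUnit as an assumed helper name:

package main

import (
	"fmt"
	"os/exec"
)

// gatherUnit tails the last n lines of a systemd unit's journal, the
// same shape as the logged `journalctl -u <unit> -n 400` commands.
func gatherUnit(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := gatherUnit(unit, 400)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("== %s (last 400 lines) ==\n%s", unit, logs)
	}
	// dmesg is filtered to warn and worse, mirroring the logged flags.
	_ = exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").Run()
}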
	I1206 10:11:02.390395  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:02.401485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:02.401558  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:02.427611  293728 cri.go:89] found id: ""
	I1206 10:11:02.427638  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.427647  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:02.427654  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:02.427729  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:02.454049  293728 cri.go:89] found id: ""
	I1206 10:11:02.454078  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.454087  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:02.454093  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:02.454154  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:02.480392  293728 cri.go:89] found id: ""
	I1206 10:11:02.480417  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.480425  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:02.480431  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:02.480489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:02.506546  293728 cri.go:89] found id: ""
	I1206 10:11:02.506572  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.506581  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:02.506587  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:02.506647  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:02.531917  293728 cri.go:89] found id: ""
	I1206 10:11:02.531954  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.531963  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:02.531979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:02.532097  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:02.559738  293728 cri.go:89] found id: ""
	I1206 10:11:02.559759  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.559768  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:02.559774  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:02.559834  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:02.584556  293728 cri.go:89] found id: ""
	I1206 10:11:02.584578  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.584587  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:02.584593  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:02.584652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:02.617108  293728 cri.go:89] found id: ""
	I1206 10:11:02.617164  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.617174  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:02.617183  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:02.617199  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:02.645764  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:02.645802  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:02.675285  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:02.675317  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:02.733222  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:02.733262  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:02.747026  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:02.747069  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:02.827017  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:02.817993    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.818819    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.820650    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.821248    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.822937    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:05.327889  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:05.338718  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:05.338812  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:05.363857  293728 cri.go:89] found id: ""
	I1206 10:11:05.363882  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.363892  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:05.363899  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:05.363969  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:05.389419  293728 cri.go:89] found id: ""
	I1206 10:11:05.389444  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.389453  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:05.389462  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:05.389522  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:05.416875  293728 cri.go:89] found id: ""
	I1206 10:11:05.416937  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.416952  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:05.416960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:05.417018  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:05.445294  293728 cri.go:89] found id: ""
	I1206 10:11:05.445316  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.445325  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:05.445331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:05.445389  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:05.469930  293728 cri.go:89] found id: ""
	I1206 10:11:05.469952  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.469960  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:05.469966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:05.470023  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:05.494527  293728 cri.go:89] found id: ""
	I1206 10:11:05.494591  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.494623  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:05.494641  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:05.494712  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:05.519703  293728 cri.go:89] found id: ""
	I1206 10:11:05.519727  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.519736  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:05.519742  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:05.519802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:05.544697  293728 cri.go:89] found id: ""
	I1206 10:11:05.544721  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.544729  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:05.544738  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:05.544751  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:05.558261  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:05.558288  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:05.627696  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:05.618572   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.619577   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.621405   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.622011   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.623059   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:05.627760  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:05.627781  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:05.653464  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:05.653499  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:05.684619  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:05.684647  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:08.247509  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:08.260609  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:08.260730  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:08.289483  293728 cri.go:89] found id: ""
	I1206 10:11:08.289551  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.289567  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:08.289580  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:08.289640  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:08.318013  293728 cri.go:89] found id: ""
	I1206 10:11:08.318037  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.318045  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:08.318051  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:08.318110  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:08.351762  293728 cri.go:89] found id: ""
	I1206 10:11:08.351785  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.351794  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:08.351800  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:08.351858  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:08.377083  293728 cri.go:89] found id: ""
	I1206 10:11:08.377159  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.377174  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:08.377181  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:08.377240  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:08.406041  293728 cri.go:89] found id: ""
	I1206 10:11:08.406063  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.406072  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:08.406077  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:08.406135  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:08.430970  293728 cri.go:89] found id: ""
	I1206 10:11:08.430996  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.431004  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:08.431011  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:08.431096  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:08.454833  293728 cri.go:89] found id: ""
	I1206 10:11:08.454857  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.454865  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:08.454872  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:08.454931  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:08.484046  293728 cri.go:89] found id: ""
	I1206 10:11:08.484113  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.484129  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:08.484139  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:08.484150  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:08.551224  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:08.542554   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.543265   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545049   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545727   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.547350   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:08.551247  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:08.551259  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:08.577706  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:08.577740  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:08.605435  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:08.605462  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:08.665984  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:08.666020  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:11.180758  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:11.193428  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:11.193501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:11.230343  293728 cri.go:89] found id: ""
	I1206 10:11:11.230374  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.230383  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:11.230389  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:11.230452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:11.267153  293728 cri.go:89] found id: ""
	I1206 10:11:11.267177  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.267187  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:11.267193  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:11.267258  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:11.299679  293728 cri.go:89] found id: ""
	I1206 10:11:11.299708  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.299718  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:11.299724  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:11.299784  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:11.325476  293728 cri.go:89] found id: ""
	I1206 10:11:11.325503  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.325512  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:11.325518  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:11.325600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:11.351586  293728 cri.go:89] found id: ""
	I1206 10:11:11.351614  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.351624  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:11.351632  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:11.351700  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:11.377176  293728 cri.go:89] found id: ""
	I1206 10:11:11.377203  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.377212  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:11.377219  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:11.377308  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:11.402618  293728 cri.go:89] found id: ""
	I1206 10:11:11.402644  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.402652  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:11.402659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:11.402745  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:11.429503  293728 cri.go:89] found id: ""
	I1206 10:11:11.429529  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.429538  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:11.429547  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:11.429562  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:11.486599  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:11.486638  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:11.500957  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:11.500987  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:11.577987  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:11.568882   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.569760   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.571647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.572318   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.573801   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:11.578008  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:11.578021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:11.604993  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:11.605027  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
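This cycle repeats for every expected control-plane component: minikube first checks for a running apiserver process with pgrep, then asks crictl for a container matching each component name. Every query returns an empty ID list, so no control-plane container has been created at all, not even a stopped one. A minimal sketch for reproducing the same check by hand, assuming a shell inside the node (for example via `minikube ssh`); the individual commands are the ones shown in the log:

    # No PID printed means the apiserver process is not running.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Empty output per component means no matching container exists in any state.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="; sudo crictl ps -a --quiet --name="$c"
    done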
	I1206 10:11:14.137875  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:14.148737  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:14.148811  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:14.173594  293728 cri.go:89] found id: ""
	I1206 10:11:14.173671  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.173695  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:14.173714  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:14.173809  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:14.200007  293728 cri.go:89] found id: ""
	I1206 10:11:14.200033  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.200043  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:14.200050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:14.200117  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:14.233924  293728 cri.go:89] found id: ""
	I1206 10:11:14.233951  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.233959  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:14.233966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:14.234030  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:14.264436  293728 cri.go:89] found id: ""
	I1206 10:11:14.264464  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.264474  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:14.264480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:14.264540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:14.292320  293728 cri.go:89] found id: ""
	I1206 10:11:14.292348  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.292359  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:14.292365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:14.292426  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:14.317612  293728 cri.go:89] found id: ""
	I1206 10:11:14.317640  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.317649  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:14.317656  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:14.317714  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:14.342496  293728 cri.go:89] found id: ""
	I1206 10:11:14.342521  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.342530  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:14.342536  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:14.342596  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:14.368247  293728 cri.go:89] found id: ""
	I1206 10:11:14.368273  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.368282  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:14.368292  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:14.368304  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:14.394942  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:14.394976  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:14.428315  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:14.428345  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:14.484824  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:14.484855  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:14.498675  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:14.498705  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:14.568051  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:14.559253   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.560001   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.561736   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.562345   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.564094   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:14.559253   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.560001   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.561736   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.562345   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.564094   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
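Each "describe nodes" attempt fails the same way: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and the connection is refused because nothing is listening there, so the client never gets past API discovery (the memcache.go lines come from client-go's cached discovery layer). A hedged sketch for confirming the missing listener from inside the node; ss and curl are standard tools here, not commands taken from this log:

    # No output from ss means no process is bound to the apiserver port.
    sudo ss -ltnp | grep 8443 || echo 'no listener on 8443'
    # curl fails with 'connection refused' for the same reason kubectl does.
    curl -ksS https://localhost:8443/healthz || true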
	I1206 10:11:17.068293  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:17.078902  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:17.078976  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:17.103674  293728 cri.go:89] found id: ""
	I1206 10:11:17.103699  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.103708  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:17.103715  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:17.103777  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:17.139412  293728 cri.go:89] found id: ""
	I1206 10:11:17.139481  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.139503  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:17.139523  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:17.139610  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:17.168435  293728 cri.go:89] found id: ""
	I1206 10:11:17.168461  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.168470  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:17.168476  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:17.168568  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:17.198788  293728 cri.go:89] found id: ""
	I1206 10:11:17.198854  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.198879  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:17.198898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:17.198983  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:17.233132  293728 cri.go:89] found id: ""
	I1206 10:11:17.233218  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.233242  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:17.233262  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:17.233356  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:17.268547  293728 cri.go:89] found id: ""
	I1206 10:11:17.268613  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.268637  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:17.268655  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:17.268741  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:17.303935  293728 cri.go:89] found id: ""
	I1206 10:11:17.303957  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.303966  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:17.303972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:17.304032  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:17.328050  293728 cri.go:89] found id: ""
	I1206 10:11:17.328074  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.328084  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:17.328092  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:17.328139  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:17.387715  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:17.387750  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:17.401545  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:17.401576  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:17.467905  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:17.459187   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.459639   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461308   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461736   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.463309   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:17.459187   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.459639   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461308   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461736   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.463309   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:17.467927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:17.467939  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:17.493972  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:17.494007  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:20.027522  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:20.040220  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:20.040323  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:20.068566  293728 cri.go:89] found id: ""
	I1206 10:11:20.068592  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.068602  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:20.068610  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:20.068691  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:20.096577  293728 cri.go:89] found id: ""
	I1206 10:11:20.096616  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.096626  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:20.096633  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:20.096791  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:20.125150  293728 cri.go:89] found id: ""
	I1206 10:11:20.125175  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.125185  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:20.125192  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:20.125253  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:20.151199  293728 cri.go:89] found id: ""
	I1206 10:11:20.151225  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.151234  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:20.151241  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:20.151303  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:20.177323  293728 cri.go:89] found id: ""
	I1206 10:11:20.177349  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.177359  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:20.177365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:20.177454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:20.207914  293728 cri.go:89] found id: ""
	I1206 10:11:20.207940  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.207950  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:20.207956  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:20.208015  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:20.250213  293728 cri.go:89] found id: ""
	I1206 10:11:20.250247  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.250256  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:20.250265  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:20.250336  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:20.284320  293728 cri.go:89] found id: ""
	I1206 10:11:20.284356  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.284365  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:20.284374  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:20.284384  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:20.317496  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:20.317524  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:20.373988  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:20.374021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:20.387702  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:20.387728  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:20.454347  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:20.446421   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.447014   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448572   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448979   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.450465   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:20.446421   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.447014   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448572   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448979   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.450465   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:20.454370  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:20.454383  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
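Even with zero containers found, each iteration still collects the host-level logs. The four gather commands appear verbatim in the log, and the container-status one carries its own fallback chain, resolving crictl with `which` and falling back to `docker ps -a` if the crictl invocation fails:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Note that the gather order varies between iterations; only the set of sources is fixed.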
	I1206 10:11:22.980202  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:22.991835  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:22.991961  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:23.022299  293728 cri.go:89] found id: ""
	I1206 10:11:23.022379  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.022404  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:23.022423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:23.022532  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:23.055611  293728 cri.go:89] found id: ""
	I1206 10:11:23.055634  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.055643  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:23.055649  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:23.055708  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:23.080752  293728 cri.go:89] found id: ""
	I1206 10:11:23.080828  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.080850  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:23.080870  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:23.080965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:23.106107  293728 cri.go:89] found id: ""
	I1206 10:11:23.106134  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.106143  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:23.106150  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:23.106212  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:23.132303  293728 cri.go:89] found id: ""
	I1206 10:11:23.132327  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.132335  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:23.132342  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:23.132408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:23.156632  293728 cri.go:89] found id: ""
	I1206 10:11:23.156697  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.156712  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:23.156719  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:23.156775  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:23.180697  293728 cri.go:89] found id: ""
	I1206 10:11:23.180764  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.180777  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:23.180784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:23.180842  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:23.208267  293728 cri.go:89] found id: ""
	I1206 10:11:23.208341  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.208364  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:23.208387  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:23.208425  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:23.292598  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:23.283687   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.284573   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.286441   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.287115   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.288724   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:23.283687   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.284573   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.286441   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.287115   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.288724   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:23.292618  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:23.292631  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:23.318604  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:23.318641  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:23.352649  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:23.352676  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:23.411769  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:23.411803  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:25.925870  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:25.936619  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:25.936701  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:25.963699  293728 cri.go:89] found id: ""
	I1206 10:11:25.963722  293728 logs.go:282] 0 containers: []
	W1206 10:11:25.963731  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:25.963738  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:25.963802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:25.995991  293728 cri.go:89] found id: ""
	I1206 10:11:25.996066  293728 logs.go:282] 0 containers: []
	W1206 10:11:25.996088  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:25.996106  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:25.996196  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:26.030700  293728 cri.go:89] found id: ""
	I1206 10:11:26.030728  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.030738  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:26.030745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:26.030809  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:26.066012  293728 cri.go:89] found id: ""
	I1206 10:11:26.066044  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.066054  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:26.066060  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:26.066125  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:26.092723  293728 cri.go:89] found id: ""
	I1206 10:11:26.092753  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.092763  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:26.092769  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:26.092837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:26.120031  293728 cri.go:89] found id: ""
	I1206 10:11:26.120108  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.120125  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:26.120132  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:26.120198  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:26.147104  293728 cri.go:89] found id: ""
	I1206 10:11:26.147131  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.147152  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:26.147158  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:26.147257  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:26.173188  293728 cri.go:89] found id: ""
	I1206 10:11:26.173212  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.173221  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:26.173230  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:26.173273  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:26.259536  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:26.250765   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.251710   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253385   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253690   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.255208   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:26.250765   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.251710   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253385   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253690   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.255208   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:26.259581  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:26.259596  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:26.288770  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:26.288853  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:26.318991  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:26.319082  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:26.377710  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:26.377743  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
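Within a single attempt, the five consecutive memcache.go errors share one kubectl PID (10803 in the block above) and fall within a few milliseconds, so they are apparently retried discovery requests inside one invocation rather than five separate runs. The invocation itself can be replayed verbatim from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig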
	I1206 10:11:28.892920  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:28.903557  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:28.903622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:28.928667  293728 cri.go:89] found id: ""
	I1206 10:11:28.928691  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.928699  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:28.928707  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:28.928767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:28.953528  293728 cri.go:89] found id: ""
	I1206 10:11:28.953554  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.953562  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:28.953568  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:28.953626  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:28.981995  293728 cri.go:89] found id: ""
	I1206 10:11:28.982022  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.982031  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:28.982037  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:28.982101  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:29.021133  293728 cri.go:89] found id: ""
	I1206 10:11:29.021161  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.021170  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:29.021177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:29.021244  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:29.051961  293728 cri.go:89] found id: ""
	I1206 10:11:29.052044  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.052056  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:29.052063  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:29.052157  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:29.076239  293728 cri.go:89] found id: ""
	I1206 10:11:29.076260  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.076268  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:29.076274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:29.076331  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:29.100533  293728 cri.go:89] found id: ""
	I1206 10:11:29.100568  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.100577  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:29.100583  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:29.100642  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:29.125877  293728 cri.go:89] found id: ""
	I1206 10:11:29.125900  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.125909  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:29.125917  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:29.125929  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:29.184407  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:29.184441  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:29.198478  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:29.198553  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:29.291075  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:29.280844   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.281788   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285131   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285582   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.287240   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:29.280844   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.281788   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285131   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285582   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.287240   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:29.291096  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:29.291109  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:29.317026  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:29.317059  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
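The poll cadence is visible in the timestamps: a new pgrep check starts roughly every three seconds (10:11:14, :17, :20, :22, and so on), consistent with a fixed-interval wait loop. A rough shell equivalent of what the log shows, assuming the same in-node shell as above:

    # Poll until an apiserver process appears (this log never reaches that point).
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done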
	I1206 10:11:31.845985  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:31.857066  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:31.857145  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:31.882982  293728 cri.go:89] found id: ""
	I1206 10:11:31.883059  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.883081  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:31.883101  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:31.883187  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:31.908108  293728 cri.go:89] found id: ""
	I1206 10:11:31.908138  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.908148  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:31.908154  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:31.908244  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:31.933164  293728 cri.go:89] found id: ""
	I1206 10:11:31.933188  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.933197  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:31.933204  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:31.933261  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:31.961760  293728 cri.go:89] found id: ""
	I1206 10:11:31.961784  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.961792  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:31.961798  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:31.961864  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:31.993806  293728 cri.go:89] found id: ""
	I1206 10:11:31.993836  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.993845  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:31.993851  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:31.993915  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:32.025453  293728 cri.go:89] found id: ""
	I1206 10:11:32.025480  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.025489  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:32.025496  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:32.025556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:32.053138  293728 cri.go:89] found id: ""
	I1206 10:11:32.053160  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.053171  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:32.053177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:32.053236  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:32.084984  293728 cri.go:89] found id: ""
	I1206 10:11:32.085009  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.085018  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:32.085027  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:32.085058  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:32.113246  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:32.113276  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:32.170516  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:32.170553  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:32.184767  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:32.184797  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:32.266194  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:32.257320   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.258649   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.259490   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.260223   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.261917   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:32.257320   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.258649   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.259490   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.260223   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.261917   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:32.266261  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:32.266289  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:34.798474  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:34.809168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:34.809239  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:34.837292  293728 cri.go:89] found id: ""
	I1206 10:11:34.837314  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.837322  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:34.837329  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:34.837387  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:34.863331  293728 cri.go:89] found id: ""
	I1206 10:11:34.863353  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.863362  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:34.863369  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:34.863465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:34.893355  293728 cri.go:89] found id: ""
	I1206 10:11:34.893379  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.893388  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:34.893395  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:34.893452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:34.919127  293728 cri.go:89] found id: ""
	I1206 10:11:34.919153  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.919162  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:34.919169  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:34.919228  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:34.948423  293728 cri.go:89] found id: ""
	I1206 10:11:34.948448  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.948458  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:34.948467  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:34.948526  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:34.984476  293728 cri.go:89] found id: ""
	I1206 10:11:34.984503  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.984513  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:34.984520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:34.984579  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:35.017804  293728 cri.go:89] found id: ""
	I1206 10:11:35.017831  293728 logs.go:282] 0 containers: []
	W1206 10:11:35.017840  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:35.017847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:35.017955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:35.049243  293728 cri.go:89] found id: ""
	I1206 10:11:35.049270  293728 logs.go:282] 0 containers: []
	W1206 10:11:35.049279  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:35.049288  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:35.049300  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:35.109333  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:35.109371  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:35.123612  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:35.123643  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:35.191474  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:35.181616   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.182533   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184226   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184809   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.186401   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
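
Every describe nodes attempt in this stretch fails the same way: the kubeconfig points kubectl at https://localhost:8443, and with no kube-apiserver container present (each crictl ps --name=kube-apiserver above returns an empty ID list) the TCP connect is refused outright. A raw dial separates this no-listener state from an apiserver that is listening but unhealthy; the Go sketch below is illustrative only, with the address taken from the log and everything else assumed:

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the apiserver port the refused kubectl calls above target, but at
// the TCP layer: "connection refused" means nothing is listening on 8443
// at all, which matches the empty crictl listings in this log.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (matches the log):", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 accepts connections; the next check would be /readyz")
}
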
	I1206 10:11:35.191495  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:35.191509  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:35.217926  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:35.218007  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:37.758372  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:37.769553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:37.769625  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:37.799573  293728 cri.go:89] found id: ""
	I1206 10:11:37.799606  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.799617  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:37.799626  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:37.799697  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:37.828542  293728 cri.go:89] found id: ""
	I1206 10:11:37.828580  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.828589  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:37.828595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:37.828670  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:37.854197  293728 cri.go:89] found id: ""
	I1206 10:11:37.854223  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.854233  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:37.854239  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:37.854299  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:37.879147  293728 cri.go:89] found id: ""
	I1206 10:11:37.879220  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.879243  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:37.879261  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:37.879346  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:37.905390  293728 cri.go:89] found id: ""
	I1206 10:11:37.905412  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.905421  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:37.905428  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:37.905533  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:37.933187  293728 cri.go:89] found id: ""
	I1206 10:11:37.933251  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.933266  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:37.933273  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:37.933333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:37.957719  293728 cri.go:89] found id: ""
	I1206 10:11:37.957743  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.957756  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:37.957763  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:37.957823  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:37.991726  293728 cri.go:89] found id: ""
	I1206 10:11:37.991755  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.991765  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:37.991775  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:37.991787  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:38.072266  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:38.063102   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.063715   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.065465   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.066011   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.067888   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:38.072293  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:38.072308  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:38.100264  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:38.100302  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:38.128959  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:38.128989  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:38.186487  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:38.186517  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
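
The "Gathering logs for ..." collectors that repeat above (kubelet, dmesg, describe nodes, containerd, container status) are each one shell pipeline run through /bin/bash -c, and a failing collector, like describe nodes here, is recorded at W level and skipped rather than aborting the rest. A minimal Go sketch of that fan-out, with the command strings copied from the log and the wrapper itself an assumption:

package main

import (
	"fmt"
	"os/exec"
)

// Run each log collector from the report through bash, continuing past
// failures the way the W-level "failed describe nodes" entries do.
func main() {
	collectors := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range collectors {
		fmt.Printf("Gathering logs for %s ...\n", c.name)
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		if err != nil {
			// Record the failure and move on to the next collector.
			fmt.Printf("failed %s: %v\n%s", c.name, err, out)
			continue
		}
		fmt.Printf("%s", out)
	}
}
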
	I1206 10:11:40.700896  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:40.711768  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:40.711841  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:40.737641  293728 cri.go:89] found id: ""
	I1206 10:11:40.737664  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.737675  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:40.737681  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:40.737740  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:40.763410  293728 cri.go:89] found id: ""
	I1206 10:11:40.763437  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.763447  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:40.763453  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:40.763521  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:40.788254  293728 cri.go:89] found id: ""
	I1206 10:11:40.788277  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.788287  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:40.788293  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:40.788351  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:40.812429  293728 cri.go:89] found id: ""
	I1206 10:11:40.812454  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.812464  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:40.812470  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:40.812577  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:40.836598  293728 cri.go:89] found id: ""
	I1206 10:11:40.836623  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.836632  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:40.836639  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:40.836699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:40.865558  293728 cri.go:89] found id: ""
	I1206 10:11:40.865584  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.865593  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:40.865600  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:40.865658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:40.890394  293728 cri.go:89] found id: ""
	I1206 10:11:40.890419  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.890428  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:40.890434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:40.890494  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:40.919443  293728 cri.go:89] found id: ""
	I1206 10:11:40.919471  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.919480  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:40.919489  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:40.919501  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:40.932761  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:40.932788  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:41.018904  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:41.007696   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.008625   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.010702   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.011857   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.013002   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:41.018927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:41.018942  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:41.049613  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:41.049648  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:41.077525  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:41.077552  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:43.637314  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:43.648009  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:43.648084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:43.673268  293728 cri.go:89] found id: ""
	I1206 10:11:43.673291  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.673299  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:43.673306  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:43.673363  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:43.698533  293728 cri.go:89] found id: ""
	I1206 10:11:43.698563  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.698573  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:43.698579  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:43.698666  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:43.726409  293728 cri.go:89] found id: ""
	I1206 10:11:43.726434  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.726443  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:43.726449  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:43.726524  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:43.753336  293728 cri.go:89] found id: ""
	I1206 10:11:43.753361  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.753371  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:43.753377  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:43.753468  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:43.778503  293728 cri.go:89] found id: ""
	I1206 10:11:43.778526  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.778535  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:43.778541  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:43.778622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:43.806530  293728 cri.go:89] found id: ""
	I1206 10:11:43.806554  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.806564  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:43.806570  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:43.806652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:43.831543  293728 cri.go:89] found id: ""
	I1206 10:11:43.831570  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.831579  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:43.831585  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:43.831644  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:43.856767  293728 cri.go:89] found id: ""
	I1206 10:11:43.856791  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.856800  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:43.856808  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:43.856821  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:43.926714  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:43.918532   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.919086   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.920754   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.921218   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.922816   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:43.926736  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:43.926751  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:43.953140  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:43.953176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:43.986579  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:43.986611  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:44.046797  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:44.046832  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
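
The timestamps above show a probe roughly every three seconds: look for a kube-apiserver process with pgrep, fall back to a crictl container query, gather logs, and retry until some deadline. The loop below is a hypothetical reconstruction of that pattern, not minikube's actual implementation; the 3s interval is read off the log timestamps and the 6-minute deadline is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverRunning mirrors the two probes in the log: pgrep for the
// process, then crictl ps --name=kube-apiserver for a container ID.
func apiserverRunning() bool {
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
		return true
	}
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // interval inferred from the log timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
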
	I1206 10:11:46.561087  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:46.574475  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:46.574548  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:46.603568  293728 cri.go:89] found id: ""
	I1206 10:11:46.603593  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.603601  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:46.603608  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:46.603688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:46.629999  293728 cri.go:89] found id: ""
	I1206 10:11:46.630024  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.630034  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:46.630040  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:46.630120  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:46.657373  293728 cri.go:89] found id: ""
	I1206 10:11:46.657399  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.657408  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:46.657414  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:46.657472  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:46.682131  293728 cri.go:89] found id: ""
	I1206 10:11:46.682157  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.682166  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:46.682172  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:46.682229  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:46.712112  293728 cri.go:89] found id: ""
	I1206 10:11:46.712184  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.712201  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:46.712209  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:46.712273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:46.737272  293728 cri.go:89] found id: ""
	I1206 10:11:46.737308  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.737317  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:46.737323  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:46.737402  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:46.762747  293728 cri.go:89] found id: ""
	I1206 10:11:46.762773  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.762782  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:46.762814  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:46.762904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:46.789056  293728 cri.go:89] found id: ""
	I1206 10:11:46.789092  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.789101  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:46.789110  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:46.789122  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:46.852031  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:46.843591   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.844469   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846096   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846414   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.847930   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:46.852055  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:46.852068  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:46.878458  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:46.878490  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:46.909497  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:46.909523  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:46.966671  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:46.966706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:49.484723  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:49.499040  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:49.499143  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:49.530152  293728 cri.go:89] found id: ""
	I1206 10:11:49.530195  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.530204  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:49.530228  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:49.530311  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:49.556277  293728 cri.go:89] found id: ""
	I1206 10:11:49.556302  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.556311  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:49.556317  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:49.556422  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:49.582278  293728 cri.go:89] found id: ""
	I1206 10:11:49.582303  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.582312  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:49.582318  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:49.582386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:49.608504  293728 cri.go:89] found id: ""
	I1206 10:11:49.608529  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.608538  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:49.608544  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:49.608624  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:49.633347  293728 cri.go:89] found id: ""
	I1206 10:11:49.633414  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.633429  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:49.633436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:49.633495  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:49.658195  293728 cri.go:89] found id: ""
	I1206 10:11:49.658223  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.658233  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:49.658240  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:49.658297  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:49.691086  293728 cri.go:89] found id: ""
	I1206 10:11:49.691112  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.691122  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:49.691128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:49.691213  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:49.716625  293728 cri.go:89] found id: ""
	I1206 10:11:49.716652  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.716661  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:49.716669  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:49.716684  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:49.778048  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:49.778093  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:49.792187  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:49.792216  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:49.858528  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:49.849703   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.850362   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852120   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852678   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.854314   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:49.858551  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:49.858566  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:49.884659  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:49.884691  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
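
Each probe cycle walks the same fixed component list (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), asking crictl for container IDs by name and warning "No container was found" on an empty result. A minimal sketch of that listing pass, assuming only the crictl invocation shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// List containers for each control-plane component by name; an empty ID
// list produces the same warning seen throughout this log.
func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
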
	I1206 10:11:52.413397  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:52.424250  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:52.424322  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:52.454481  293728 cri.go:89] found id: ""
	I1206 10:11:52.454557  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.454573  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:52.454581  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:52.454642  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:52.487281  293728 cri.go:89] found id: ""
	I1206 10:11:52.487315  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.487325  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:52.487331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:52.487408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:52.522975  293728 cri.go:89] found id: ""
	I1206 10:11:52.523008  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.523025  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:52.523032  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:52.523102  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:52.557389  293728 cri.go:89] found id: ""
	I1206 10:11:52.557421  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.557430  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:52.557436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:52.557494  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:52.583449  293728 cri.go:89] found id: ""
	I1206 10:11:52.583474  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.583483  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:52.583490  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:52.583608  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:52.608370  293728 cri.go:89] found id: ""
	I1206 10:11:52.608412  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.608422  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:52.608429  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:52.608499  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:52.637950  293728 cri.go:89] found id: ""
	I1206 10:11:52.638026  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.638051  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:52.638069  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:52.638160  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:52.663271  293728 cri.go:89] found id: ""
	I1206 10:11:52.663349  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.663413  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:52.663443  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:52.663464  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:52.721303  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:52.721339  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:52.735517  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:52.735548  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:52.806629  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:52.798101   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.799086   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800264   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800722   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.802387   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:52.806652  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:52.806666  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:52.834909  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:52.834944  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:55.365104  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:55.376039  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:55.376112  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:55.401088  293728 cri.go:89] found id: ""
	I1206 10:11:55.401114  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.401123  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:55.401130  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:55.401187  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:55.426712  293728 cri.go:89] found id: ""
	I1206 10:11:55.426735  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.426744  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:55.426752  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:55.426808  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:55.453355  293728 cri.go:89] found id: ""
	I1206 10:11:55.453433  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.453449  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:55.453456  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:55.453524  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:55.482694  293728 cri.go:89] found id: ""
	I1206 10:11:55.482786  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.482809  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:55.482831  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:55.482965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:55.517524  293728 cri.go:89] found id: ""
	I1206 10:11:55.517567  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.517576  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:55.517582  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:55.517651  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:55.552808  293728 cri.go:89] found id: ""
	I1206 10:11:55.552887  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.552919  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:55.552943  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:55.553051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:55.582318  293728 cri.go:89] found id: ""
	I1206 10:11:55.582391  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.582413  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:55.582435  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:55.582545  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:55.611979  293728 cri.go:89] found id: ""
	I1206 10:11:55.612012  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.612021  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:55.612030  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:55.612043  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:55.641663  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:55.641691  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:55.699247  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:55.699281  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:55.714284  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:55.714312  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:55.779980  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:55.771718   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.772511   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774153   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774506   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.776084   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:55.780002  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:55.780020  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:58.307533  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:58.318444  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:58.318517  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:58.346128  293728 cri.go:89] found id: ""
	I1206 10:11:58.346181  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.346194  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:58.346202  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:58.346276  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:58.370957  293728 cri.go:89] found id: ""
	I1206 10:11:58.370992  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.371001  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:58.371013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:58.371093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:58.397685  293728 cri.go:89] found id: ""
	I1206 10:11:58.397717  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.397726  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:58.397732  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:58.397803  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:58.426933  293728 cri.go:89] found id: ""
	I1206 10:11:58.426959  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.426967  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:58.426973  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:58.427051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:58.456330  293728 cri.go:89] found id: ""
	I1206 10:11:58.456365  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.456375  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:58.456381  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:58.456448  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:58.494975  293728 cri.go:89] found id: ""
	I1206 10:11:58.495018  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.495027  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:58.495034  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:58.495106  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:58.532346  293728 cri.go:89] found id: ""
	I1206 10:11:58.532379  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.532389  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:58.532395  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:58.532465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:58.558540  293728 cri.go:89] found id: ""
	I1206 10:11:58.558576  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.558584  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:58.558593  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:58.558605  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:58.573220  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:58.573249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:58.639437  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:58.631044   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.631569   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633054   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633435   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.634868   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:58.631044   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.631569   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633054   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633435   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.634868   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:58.639512  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:58.639535  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:58.664823  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:58.664861  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:58.692934  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:58.692966  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:01.250858  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:01.262935  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:01.263112  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:01.291081  293728 cri.go:89] found id: ""
	I1206 10:12:01.291107  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.291117  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:01.291123  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:01.291204  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:01.318105  293728 cri.go:89] found id: ""
	I1206 10:12:01.318138  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.318147  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:01.318168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:01.318249  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:01.344419  293728 cri.go:89] found id: ""
	I1206 10:12:01.344488  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.344514  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:01.344528  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:01.344601  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:01.370652  293728 cri.go:89] found id: ""
	I1206 10:12:01.370677  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.370686  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:01.370693  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:01.370751  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:01.397501  293728 cri.go:89] found id: ""
	I1206 10:12:01.397528  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.397538  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:01.397544  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:01.397603  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:01.423444  293728 cri.go:89] found id: ""
	I1206 10:12:01.423517  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.423541  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:01.423563  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:01.423646  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:01.453268  293728 cri.go:89] found id: ""
	I1206 10:12:01.453294  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.453303  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:01.453316  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:01.453417  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:01.481810  293728 cri.go:89] found id: ""
	I1206 10:12:01.481890  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.481915  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:01.481932  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:01.481959  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:01.538994  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:01.539079  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:01.553293  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:01.553320  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:01.623989  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:01.612749   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.615513   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.616460   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618024   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618347   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:01.612749   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.615513   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.616460   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618024   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618347   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:01.624063  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:01.624085  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:01.649724  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:01.649757  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:04.179886  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:04.191201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:04.191273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:04.216964  293728 cri.go:89] found id: ""
	I1206 10:12:04.217045  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.217065  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:04.217072  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:04.217168  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:04.252840  293728 cri.go:89] found id: ""
	I1206 10:12:04.252875  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.252884  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:04.252891  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:04.252965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:04.281583  293728 cri.go:89] found id: ""
	I1206 10:12:04.281614  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.281623  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:04.281629  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:04.281695  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:04.311479  293728 cri.go:89] found id: ""
	I1206 10:12:04.311547  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.311571  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:04.311585  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:04.311658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:04.337184  293728 cri.go:89] found id: ""
	I1206 10:12:04.337213  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.337221  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:04.337228  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:04.337307  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:04.363672  293728 cri.go:89] found id: ""
	I1206 10:12:04.363705  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.363715  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:04.363738  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:04.363836  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:04.394214  293728 cri.go:89] found id: ""
	I1206 10:12:04.394240  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.394249  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:04.394256  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:04.394367  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:04.419254  293728 cri.go:89] found id: ""
	I1206 10:12:04.419335  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.419359  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:04.419403  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:04.419437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:04.451555  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:04.451582  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:04.509304  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:04.509336  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:04.523821  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:04.523848  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:04.591566  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:04.581768   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.583295   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.584171   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.585971   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.586453   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:04.581768   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.583295   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.584171   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.585971   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.586453   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:04.591591  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:04.591604  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
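[Editor's note] The probe pattern repeated in every cycle (found id: "" followed by No container was found matching "...") comes from running crictl ps -a --quiet --name=<component> for each control-plane component and treating empty output as "not found". A minimal sketch of that probe, assuming crictl on PATH and passwordless sudo; findContainer is a hypothetical helper, not minikube's cri.go.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // findContainer returns the container IDs crictl reports for a given
    // name filter; an empty slice corresponds to the found id: "" lines above.
    func findContainer(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	// The same component list the cycles above walk through, in order.
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
    		ids, err := findContainer(c)
    		if err != nil {
    			fmt.Printf("probe %q failed: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("no container matching %q\n", c)
    		} else {
    			fmt.Printf("%q: %v\n", c, ids)
    		}
    	}
    }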
	I1206 10:12:07.121570  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:07.132505  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:07.132585  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:07.157021  293728 cri.go:89] found id: ""
	I1206 10:12:07.157047  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.157056  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:07.157063  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:07.157151  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:07.182478  293728 cri.go:89] found id: ""
	I1206 10:12:07.182510  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.182519  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:07.182526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:07.182597  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:07.212401  293728 cri.go:89] found id: ""
	I1206 10:12:07.212424  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.212433  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:07.212439  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:07.212498  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:07.246228  293728 cri.go:89] found id: ""
	I1206 10:12:07.246255  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.246264  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:07.246271  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:07.246333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:07.273777  293728 cri.go:89] found id: ""
	I1206 10:12:07.273802  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.273811  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:07.273817  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:07.273878  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:07.302425  293728 cri.go:89] found id: ""
	I1206 10:12:07.302464  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.302473  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:07.302481  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:07.302556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:07.328379  293728 cri.go:89] found id: ""
	I1206 10:12:07.328403  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.328412  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:07.328418  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:07.328476  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:07.358727  293728 cri.go:89] found id: ""
	I1206 10:12:07.358751  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.358760  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:07.358771  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:07.358811  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:07.415522  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:07.415561  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:07.429309  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:07.429338  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:07.497723  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:07.488450   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.488945   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.490709   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.491285   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.492907   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:07.488450   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.488945   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.490709   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.491285   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.492907   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:07.497749  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:07.497762  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:07.524612  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:07.524648  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:10.055528  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:10.066871  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:10.066968  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:10.092582  293728 cri.go:89] found id: ""
	I1206 10:12:10.092611  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.092622  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:10.092630  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:10.092695  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:10.120230  293728 cri.go:89] found id: ""
	I1206 10:12:10.120321  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.120347  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:10.120366  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:10.120465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:10.146387  293728 cri.go:89] found id: ""
	I1206 10:12:10.146464  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.146489  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:10.146508  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:10.146582  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:10.173457  293728 cri.go:89] found id: ""
	I1206 10:12:10.173484  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.173493  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:10.173500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:10.173592  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:10.202187  293728 cri.go:89] found id: ""
	I1206 10:12:10.202262  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.202285  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:10.202303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:10.202393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:10.232838  293728 cri.go:89] found id: ""
	I1206 10:12:10.232901  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.232922  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:10.232940  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:10.233025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:10.267445  293728 cri.go:89] found id: ""
	I1206 10:12:10.267520  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.267543  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:10.267561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:10.267650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:10.298314  293728 cri.go:89] found id: ""
	I1206 10:12:10.298389  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.298412  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:10.298434  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:10.298472  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:10.325341  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:10.325374  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:10.385049  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:10.385081  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:10.398513  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:10.398540  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:10.463844  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:10.454441   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.455251   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457119   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457874   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.459632   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:10.454441   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.455251   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457119   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457874   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.459632   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:10.463908  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:10.463945  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:12.991294  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:13.006571  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:13.006645  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:13.040431  293728 cri.go:89] found id: ""
	I1206 10:12:13.040457  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.040466  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:13.040479  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:13.040544  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:13.066025  293728 cri.go:89] found id: ""
	I1206 10:12:13.066047  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.066056  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:13.066062  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:13.066134  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:13.093459  293728 cri.go:89] found id: ""
	I1206 10:12:13.093482  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.093491  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:13.093496  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:13.093556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:13.118066  293728 cri.go:89] found id: ""
	I1206 10:12:13.118089  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.118098  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:13.118104  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:13.118162  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:13.145619  293728 cri.go:89] found id: ""
	I1206 10:12:13.145685  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.145704  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:13.145711  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:13.145770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:13.174833  293728 cri.go:89] found id: ""
	I1206 10:12:13.174857  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.174866  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:13.174872  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:13.174934  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:13.200490  293728 cri.go:89] found id: ""
	I1206 10:12:13.200517  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.200526  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:13.200532  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:13.200590  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:13.243683  293728 cri.go:89] found id: ""
	I1206 10:12:13.243709  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.243718  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:13.243726  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:13.243741  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:13.279303  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:13.279330  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:13.337861  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:13.337897  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:13.351559  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:13.351634  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:13.413990  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:13.406460   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.406956   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408410   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408802   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.410225   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:13.406460   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.406956   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408410   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408802   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.410225   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:13.414012  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:13.414028  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:15.940438  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:15.952379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:15.952452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:15.997713  293728 cri.go:89] found id: ""
	I1206 10:12:15.997741  293728 logs.go:282] 0 containers: []
	W1206 10:12:15.997749  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:15.997755  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:15.997814  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:16.027447  293728 cri.go:89] found id: ""
	I1206 10:12:16.027477  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.027486  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:16.027494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:16.027552  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:16.056201  293728 cri.go:89] found id: ""
	I1206 10:12:16.056224  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.056232  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:16.056238  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:16.056296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:16.080619  293728 cri.go:89] found id: ""
	I1206 10:12:16.080641  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.080650  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:16.080657  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:16.080736  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:16.106294  293728 cri.go:89] found id: ""
	I1206 10:12:16.106316  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.106324  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:16.106330  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:16.106393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:16.131999  293728 cri.go:89] found id: ""
	I1206 10:12:16.132026  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.132036  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:16.132042  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:16.132103  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:16.156693  293728 cri.go:89] found id: ""
	I1206 10:12:16.156719  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.156734  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:16.156740  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:16.156819  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:16.182391  293728 cri.go:89] found id: ""
	I1206 10:12:16.182416  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.182426  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:16.182436  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:16.182467  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:16.262961  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:16.251126   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.252302   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.253220   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257326   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257858   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:16.251126   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.252302   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.253220   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257326   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257858   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:16.262991  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:16.263024  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:16.292146  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:16.292180  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:16.323803  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:16.323830  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:16.382496  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:16.382530  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:18.896413  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:18.906898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:18.907007  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:18.930731  293728 cri.go:89] found id: ""
	I1206 10:12:18.930763  293728 logs.go:282] 0 containers: []
	W1206 10:12:18.930773  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:18.930779  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:18.930844  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:18.955309  293728 cri.go:89] found id: ""
	I1206 10:12:18.955334  293728 logs.go:282] 0 containers: []
	W1206 10:12:18.955343  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:18.955349  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:18.955428  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:18.987453  293728 cri.go:89] found id: ""
	I1206 10:12:18.987480  293728 logs.go:282] 0 containers: []
	W1206 10:12:18.987489  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:18.987495  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:18.987559  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:19.016315  293728 cri.go:89] found id: ""
	I1206 10:12:19.016359  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.016369  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:19.016376  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:19.016457  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:19.046838  293728 cri.go:89] found id: ""
	I1206 10:12:19.046914  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.046939  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:19.046958  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:19.047088  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:19.076303  293728 cri.go:89] found id: ""
	I1206 10:12:19.076339  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.076348  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:19.076355  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:19.076424  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:19.100478  293728 cri.go:89] found id: ""
	I1206 10:12:19.100505  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.100514  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:19.100520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:19.100600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:19.125238  293728 cri.go:89] found id: ""
	I1206 10:12:19.125303  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.125317  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:19.125327  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:19.125338  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:19.181824  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:19.181858  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:19.195937  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:19.195963  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:19.288898  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:19.278661   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.279479   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.281324   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.282074   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.284115   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:19.278661   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.279479   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.281324   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.282074   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.284115   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:19.288922  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:19.288935  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:19.314454  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:19.314487  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:21.845581  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:21.856143  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:21.856207  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:21.880174  293728 cri.go:89] found id: ""
	I1206 10:12:21.880197  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.880206  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:21.880212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:21.880273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:21.906162  293728 cri.go:89] found id: ""
	I1206 10:12:21.906195  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.906204  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:21.906209  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:21.906277  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:21.929912  293728 cri.go:89] found id: ""
	I1206 10:12:21.929936  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.929945  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:21.929951  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:21.930017  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:21.955257  293728 cri.go:89] found id: ""
	I1206 10:12:21.955288  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.955297  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:21.955303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:21.955403  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:21.988656  293728 cri.go:89] found id: ""
	I1206 10:12:21.988682  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.988691  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:21.988698  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:21.988766  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:22.026205  293728 cri.go:89] found id: ""
	I1206 10:12:22.026232  293728 logs.go:282] 0 containers: []
	W1206 10:12:22.026241  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:22.026248  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:22.026321  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:22.056883  293728 cri.go:89] found id: ""
	I1206 10:12:22.056906  293728 logs.go:282] 0 containers: []
	W1206 10:12:22.056915  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:22.056923  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:22.056983  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:22.087245  293728 cri.go:89] found id: ""
	I1206 10:12:22.087269  293728 logs.go:282] 0 containers: []
	W1206 10:12:22.087277  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:22.087286  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:22.087296  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:22.148181  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:22.148213  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:22.161924  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:22.161952  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:22.238449  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:22.229386   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.230236   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.231860   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.232500   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.234010   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:22.229386   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.230236   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.231860   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.232500   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.234010   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:22.238523  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:22.238550  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:22.268691  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:22.268765  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:24.800715  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:24.811471  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:24.811557  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:24.836234  293728 cri.go:89] found id: ""
	I1206 10:12:24.836261  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.836270  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:24.836277  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:24.836335  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:24.861915  293728 cri.go:89] found id: ""
	I1206 10:12:24.861942  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.861951  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:24.861957  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:24.862015  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:24.886931  293728 cri.go:89] found id: ""
	I1206 10:12:24.886958  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.886968  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:24.886974  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:24.887058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:24.913606  293728 cri.go:89] found id: ""
	I1206 10:12:24.913633  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.913642  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:24.913649  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:24.913708  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:24.942656  293728 cri.go:89] found id: ""
	I1206 10:12:24.942690  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.942699  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:24.942706  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:24.942772  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:24.973528  293728 cri.go:89] found id: ""
	I1206 10:12:24.973563  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.973572  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:24.973579  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:24.973654  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:25.011969  293728 cri.go:89] found id: ""
	I1206 10:12:25.012007  293728 logs.go:282] 0 containers: []
	W1206 10:12:25.012017  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:25.012024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:25.012105  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:25.041306  293728 cri.go:89] found id: ""
	I1206 10:12:25.041340  293728 logs.go:282] 0 containers: []
	W1206 10:12:25.041349  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:25.041363  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:25.041377  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:25.068464  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:25.068503  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:25.098409  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:25.098436  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:25.156122  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:25.156158  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:25.170373  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:25.170405  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:25.248624  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:25.240035   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.240794   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.242472   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.243030   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.244596   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:25.240035   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.240794   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.242472   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.243030   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.244596   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:27.748906  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:27.759522  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:27.759591  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:27.785227  293728 cri.go:89] found id: ""
	I1206 10:12:27.785250  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.785258  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:27.785264  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:27.785319  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:27.810979  293728 cri.go:89] found id: ""
	I1206 10:12:27.811011  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.811021  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:27.811028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:27.811085  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:27.837232  293728 cri.go:89] found id: ""
	I1206 10:12:27.837298  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.837313  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:27.837320  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:27.837376  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:27.861601  293728 cri.go:89] found id: ""
	I1206 10:12:27.861625  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.861634  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:27.861641  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:27.861699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:27.886862  293728 cri.go:89] found id: ""
	I1206 10:12:27.886887  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.886897  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:27.886903  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:27.886960  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:27.911189  293728 cri.go:89] found id: ""
	I1206 10:12:27.911213  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.911222  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:27.911229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:27.911285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:27.935326  293728 cri.go:89] found id: ""
	I1206 10:12:27.935352  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.935361  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:27.935368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:27.935452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:27.959524  293728 cri.go:89] found id: ""
	I1206 10:12:27.959545  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.959555  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:27.959564  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:27.959575  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:28.028099  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:28.028143  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:28.048460  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:28.048488  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:28.118674  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:28.109022   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.109888   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.111697   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.112355   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.114062   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:28.109022   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.109888   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.111697   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.112355   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.114062   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:28.118697  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:28.118709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:28.144591  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:28.144630  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:30.673088  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:30.683869  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:30.683949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:30.708341  293728 cri.go:89] found id: ""
	I1206 10:12:30.708364  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.708372  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:30.708379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:30.708434  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:30.734236  293728 cri.go:89] found id: ""
	I1206 10:12:30.734261  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.734270  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:30.734276  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:30.734333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:30.760476  293728 cri.go:89] found id: ""
	I1206 10:12:30.760499  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.760508  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:30.760520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:30.760580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:30.785771  293728 cri.go:89] found id: ""
	I1206 10:12:30.785793  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.785802  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:30.785808  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:30.785871  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:30.814408  293728 cri.go:89] found id: ""
	I1206 10:12:30.814431  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.814439  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:30.814445  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:30.814504  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:30.840084  293728 cri.go:89] found id: ""
	I1206 10:12:30.840108  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.840117  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:30.840124  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:30.840183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:30.865698  293728 cri.go:89] found id: ""
	I1206 10:12:30.865723  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.865732  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:30.865745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:30.865807  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:30.895469  293728 cri.go:89] found id: ""
	I1206 10:12:30.895538  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.895553  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:30.895562  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:30.895573  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:30.952609  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:30.952644  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:30.966729  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:30.966758  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:31.059967  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:31.049168   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.050975   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.051825   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.053830   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.054324   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:31.049168   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.050975   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.051825   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.053830   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.054324   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:31.059992  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:31.060006  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:31.087739  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:31.087785  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:33.618907  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:33.633558  293728 out.go:203] 
	W1206 10:12:33.636407  293728 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1206 10:12:33.636439  293728 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1206 10:12:33.636448  293728 out.go:285] * Related issues:
	* Related issues:
	W1206 10:12:33.636468  293728 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1206 10:12:33.636488  293728 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1206 10:12:33.640150  293728 out.go:203] 
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
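The wait loop in the log above keeps probing for a kube-apiserver process (pgrep) and a kube-apiserver CRI container (crictl) and never finds either, which is exactly what K8S_APISERVER_MISSING reports. A minimal sketch for re-running those probes by hand against the still-running node container; the container name is taken from this report, and the commands mirror the ssh_runner invocations logged above (the binaries are assumed present in the kicbase image, as the log suggests):

    # process check -- mirrors the logged 'sudo pgrep -xnf kube-apiserver.*minikube.*' probe
    docker exec newest-cni-387337 pgrep -fa kube-apiserver
    # CRI container check -- mirrors 'sudo crictl ps -a --quiet --name=kube-apiserver'
    docker exec newest-cni-387337 crictl ps -a --name kube-apiserver
    # the kubelet journal usually shows why the apiserver static pod never came up
    docker exec newest-cni-387337 journalctl -u kubelet -n 50 --no-pager

Both probes coming back empty is consistent with the apiserver static pod never being created, rather than crashing after start.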
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
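(The snapshot above covers only the three standard variables; a quick hedged check for any other proxy-related variables on the host, assuming a POSIX shell:)

    env | grep -iE '_proxy=' || echo "no proxy variables set"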
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-387337
helpers_test.go:243: (dbg) docker inspect newest-cni-387337:
-- stdout --
	[
	    {
	        "Id": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	        "Created": "2025-12-06T09:56:17.358293629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:06:25.490985794Z",
	            "FinishedAt": "2025-12-06T10:06:24.07452303Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hosts",
	        "LogPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9-json.log",
	        "Name": "/newest-cni-387337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-387337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-387337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	                "LowerDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-387337",
	                "Source": "/var/lib/docker/volumes/newest-cni-387337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-387337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-387337",
	                "name.minikube.sigs.k8s.io": "newest-cni-387337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0237cbac4089b5971baf99dcc5f5da9d321416f1c02aecd4eecab8f5eca5da8a",
	            "SandboxKey": "/var/run/docker/netns/0237cbac4089",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-387337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:c0:9f:b1:4f:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f42a70d42248e7fb537c8957fc3c9ad0a04046b4da244cdde31b86ebc56a160b",
	                    "EndpointID": "315fc1e3324af45e0df5a53d34bf5d6797d7154b55022bdff9ab7809e194b0cf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-387337",
	                        "e89a14c7a996"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
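The inspect output above shows the apiserver port 8443/tcp published on 127.0.0.1:33106. A sketch for extracting that mapping with a Go template and probing the endpoint directly (assuming the container is still up; -k skips verification of minikube's self-signed certificates):

    PORT=$(docker inspect newest-cni-387337 \
      --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}')
    curl -k "https://127.0.0.1:${PORT}/healthz"
    # with no apiserver process in the node, this connection is refused, just
    # like the localhost:8443 dials in the describe-nodes stderr above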
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (316.169281ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
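(For reference, --format takes a Go template over minikube's status struct; Host, Kubelet and APIServer are the standard component fields. The exit code encodes component state, so a stopped component yields a non-zero exit even when the host is up, which is presumably why the harness tags exit status 2 as "may be ok". A sketch printing all three components at once:)

    out/minikube-linux-arm64 status -p newest-cni-387337 -n newest-cni-387337 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'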
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-387337 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-387337 logs -n 25: (1.659982504s)
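Twenty-five lines is often too little context; a hedged sketch for capturing the full log and the raw audit events behind the Audit table below (the MINIKUBE_HOME path is taken from this run; the --file flag and the logs/audit.json location match current minikube releases but are assumptions here):

    out/minikube-linux-arm64 -p newest-cni-387337 logs --file=/tmp/newest-cni-387337.log
    tail -n 5 /home/jenkins/minikube-integration/22049-2448/.minikube/logs/audit.json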
helpers_test.go:260: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ image   │ embed-certs-100767 image list --format=json                                                                                                                                                                                                                │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-257359 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-257359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-387337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:04 UTC │                     │
	│ stop    │ -p newest-cni-387337 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │ 06 Dec 25 10:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-387337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │ 06 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:06:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:06:25.195145  293728 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:06:25.195325  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195335  293728 out.go:374] Setting ErrFile to fd 2...
	I1206 10:06:25.195341  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195634  293728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:06:25.196028  293728 out.go:368] Setting JSON to false
	I1206 10:06:25.196926  293728 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6537,"bootTime":1765009049,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:06:25.196997  293728 start.go:143] virtualization:  
	I1206 10:06:25.199959  293728 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:06:25.203880  293728 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:06:25.204017  293728 notify.go:221] Checking for updates...
	I1206 10:06:25.210368  293728 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:06:25.213374  293728 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:25.216371  293728 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:06:25.221036  293728 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:06:25.223973  293728 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:06:25.227572  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:25.228243  293728 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:06:25.261513  293728 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:06:25.261626  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.340601  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.331029372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.340708  293728 docker.go:319] overlay module found
	I1206 10:06:25.343872  293728 out.go:179] * Using the docker driver based on existing profile
	I1206 10:06:25.346835  293728 start.go:309] selected driver: docker
	I1206 10:06:25.346867  293728 start.go:927] validating driver "docker" against &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.346969  293728 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:06:25.347911  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.407260  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.398348793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.407652  293728 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 10:06:25.407684  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:25.407750  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:25.407788  293728 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.410983  293728 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 10:06:25.413800  293728 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:06:25.416704  293728 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:06:25.419472  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:25.419517  293728 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:06:25.419530  293728 cache.go:65] Caching tarball of preloaded images
	I1206 10:06:25.419542  293728 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:06:25.419614  293728 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:06:25.419624  293728 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:06:25.419745  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.439065  293728 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:06:25.439097  293728 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:06:25.439117  293728 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:06:25.439151  293728 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:06:25.439218  293728 start.go:364] duration metric: took 44.948µs to acquireMachinesLock for "newest-cni-387337"
	I1206 10:06:25.439242  293728 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:06:25.439250  293728 fix.go:54] fixHost starting: 
	I1206 10:06:25.439553  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.455936  293728 fix.go:112] recreateIfNeeded on newest-cni-387337: state=Stopped err=<nil>
	W1206 10:06:25.455970  293728 fix.go:138] unexpected machine state, will restart: <nil>
	W1206 10:06:22.222571  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:24.223444  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:25.459174  293728 out.go:252] * Restarting existing docker container for "newest-cni-387337" ...
	I1206 10:06:25.459260  293728 cli_runner.go:164] Run: docker start newest-cni-387337
	I1206 10:06:25.713574  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.738668  293728 kic.go:430] container "newest-cni-387337" state is running.
	I1206 10:06:25.739140  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:25.765706  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.766035  293728 machine.go:94] provisionDockerMachine start ...
	I1206 10:06:25.766147  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:25.787280  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:25.787973  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:25.787996  293728 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:06:25.789031  293728 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:06:28.943483  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:28.943510  293728 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 10:06:28.943583  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:28.962379  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:28.962708  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:28.962726  293728 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 10:06:29.136463  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:29.136552  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.155008  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:29.155343  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:29.155363  293728 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:06:29.311555  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: 
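Note: the hostname script above is idempotent. It rewrites an existing 127.0.1.1 entry in /etc/hosts if one is present and appends a new one otherwise, so repeated provisioning runs never stack duplicate entries. A minimal standalone sketch of the same pattern (the node name example-node is a placeholder, not taken from this run):

	NODE=example-node
	if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
	  fi
	fi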
	I1206 10:06:29.311646  293728 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:06:29.311703  293728 ubuntu.go:190] setting up certificates
	I1206 10:06:29.311733  293728 provision.go:84] configureAuth start
	I1206 10:06:29.311826  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.328361  293728 provision.go:143] copyHostCerts
	I1206 10:06:29.328435  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:06:29.328455  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:06:29.328532  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:06:29.328644  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:06:29.328655  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:06:29.328683  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:06:29.328754  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:06:29.328763  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:06:29.328788  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:06:29.328850  293728 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
	I1206 10:06:29.477422  293728 provision.go:177] copyRemoteCerts
	I1206 10:06:29.477497  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:06:29.477551  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.495349  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.603554  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:06:29.622338  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:06:29.641011  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:06:29.660417  293728 provision.go:87] duration metric: took 348.656521ms to configureAuth
	I1206 10:06:29.660488  293728 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:06:29.660700  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:29.660714  293728 machine.go:97] duration metric: took 3.894659315s to provisionDockerMachine
	I1206 10:06:29.660722  293728 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 10:06:29.660734  293728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:06:29.660787  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:06:29.660840  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.679336  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.792654  293728 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:06:29.796414  293728 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:06:29.796451  293728 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:06:29.796481  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:06:29.796555  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:06:29.796637  293728 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:06:29.796752  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:06:29.804466  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:29.822913  293728 start.go:296] duration metric: took 162.176035ms for postStartSetup
	I1206 10:06:29.822993  293728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:06:29.823033  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.841962  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.944706  293728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:06:29.949621  293728 fix.go:56] duration metric: took 4.510364001s for fixHost
	I1206 10:06:29.949690  293728 start.go:83] releasing machines lock for "newest-cni-387337", held for 4.510458303s
	I1206 10:06:29.949801  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.966982  293728 ssh_runner.go:195] Run: cat /version.json
	I1206 10:06:29.967044  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.967315  293728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:06:29.967425  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.989346  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.995399  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:30.108934  293728 ssh_runner.go:195] Run: systemctl --version
	W1206 10:06:26.722852  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:29.222555  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:30.251570  293728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:06:30.256600  293728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:06:30.256686  293728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:06:30.265366  293728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
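Note: the find invocation above renames any bridge or podman CNI configs with a .mk_disabled suffix so they cannot conflict with the CNI minikube manages (kindnet on this run); here none existed. A sketch that merely previews what would be moved aside, with -print substituted for the rename:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) -print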
	I1206 10:06:30.265436  293728 start.go:496] detecting cgroup driver to use...
	I1206 10:06:30.265475  293728 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:06:30.265547  293728 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:06:30.285393  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:06:30.300014  293728 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:06:30.300101  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:06:30.316388  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:06:30.330703  293728 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:06:30.447811  293728 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:06:30.578928  293728 docker.go:234] disabling docker service ...
	I1206 10:06:30.579012  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:06:30.595245  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:06:30.608936  293728 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:06:30.732584  293728 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:06:30.854426  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:06:30.867755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:06:30.882294  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:06:30.891997  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:06:30.901695  293728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:06:30.901766  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:06:30.911307  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.920864  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:06:30.930280  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.939955  293728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:06:30.948517  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:06:30.957894  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:06:30.967715  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
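Note: the sed runs above edit /etc/containerd/config.toml in place: they pin the sandbox image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the cgroupfs driver detected on the host, migrate legacy io.containerd.runtime.v1.linux and runc.v1 names to io.containerd.runc.v2, and re-enable unprivileged ports. A quick check of the resulting cgroup setting (default config path assumed):

	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# expect: SystemdCgroup = false -> containerd drives cgroups via cgroupfs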
	I1206 10:06:30.977793  293728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:06:30.985557  293728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:06:30.993239  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.114748  293728 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:06:31.239476  293728 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:06:31.239597  293728 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:06:31.244664  293728 start.go:564] Will wait 60s for crictl version
	I1206 10:06:31.244770  293728 ssh_runner.go:195] Run: which crictl
	I1206 10:06:31.249231  293728 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:06:31.276528  293728 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
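Note: the bare crictl calls here resolve the runtime because of the /etc/crictl.yaml written a few lines earlier; the explicit, config-free equivalent would be:

	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version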
	I1206 10:06:31.276637  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.298790  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.323558  293728 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:06:31.326534  293728 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:06:31.343556  293728 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 10:06:31.347752  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:06:31.361512  293728 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 10:06:31.364437  293728 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:06:31.364599  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:31.364692  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.390507  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.390542  293728 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:06:31.390602  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.417903  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.417928  293728 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:06:31.417937  293728 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 10:06:31.418044  293728 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
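Note: the unit above uses the standard systemd drop-in pattern: the first, empty ExecStart= clears the command inherited from the packaged kubelet.service before the second line redefines it with minikube's flags. To inspect the merged unit on the node, something like:

	systemctl cat kubelet
	systemctl show kubelet -p ExecStart --no-pager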
	I1206 10:06:31.418117  293728 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:06:31.443849  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:31.443876  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:31.443900  293728 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 10:06:31.443924  293728 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:06:31.444044  293728 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
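Note: the three documents above (InitConfiguration, ClusterConfiguration, plus the KubeletConfiguration and KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new by the scp a few lines below, and only applied if they differ from the running config. A sketch for sanity-checking such a manifest on the node, assuming the bundled kubeadm supports the validate subcommand (present since v1.26):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new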
	
	I1206 10:06:31.444118  293728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:06:31.452187  293728 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:06:31.452301  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:06:31.460150  293728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:06:31.473854  293728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:06:31.487946  293728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 10:06:31.501615  293728 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:06:31.505530  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:06:31.516062  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.633832  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:06:31.655929  293728 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 10:06:31.655955  293728 certs.go:195] generating shared ca certs ...
	I1206 10:06:31.655972  293728 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:31.656127  293728 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:06:31.656182  293728 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:06:31.656198  293728 certs.go:257] generating profile certs ...
	I1206 10:06:31.656306  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 10:06:31.656372  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 10:06:31.656419  293728 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 10:06:31.656536  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:06:31.656576  293728 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:06:31.656590  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:06:31.656620  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:06:31.656647  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:06:31.656675  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:06:31.656737  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:31.657407  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:06:31.678086  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:06:31.699851  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:06:31.722100  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:06:31.743193  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:06:31.762896  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:06:31.781616  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:06:31.801280  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:06:31.819401  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:06:31.838552  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:06:31.856936  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:06:31.875547  293728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:06:31.888930  293728 ssh_runner.go:195] Run: openssl version
	I1206 10:06:31.895342  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.903529  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:06:31.911304  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915287  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915352  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.961696  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:06:31.970315  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.981710  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:06:31.992227  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996668  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996744  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:06:32.043296  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:06:32.051139  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.058979  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:06:32.066993  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071120  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071217  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.113955  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
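Note: the /etc/ssl/certs/<hash>.0 links tested in this block (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: each openssl x509 -hash call above computes the hash that the following test -L checks. Reproducing one by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the symlink /etc/ssl/certs/b5213941.0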
	I1206 10:06:32.121998  293728 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:06:32.126168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:06:32.167933  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:06:32.209594  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:06:32.252826  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:06:32.295168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:06:32.336384  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:06:32.377923  293728 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:32.378019  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:06:32.378107  293728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:06:32.406152  293728 cri.go:89] found id: ""
	I1206 10:06:32.406224  293728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:06:32.414373  293728 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:06:32.414394  293728 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:06:32.414444  293728 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:06:32.422214  293728 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:06:32.422855  293728 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-387337" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.423179  293728 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-387337" cluster setting kubeconfig missing "newest-cni-387337" context setting]
	I1206 10:06:32.423737  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.425135  293728 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:06:32.433653  293728 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 10:06:32.433689  293728 kubeadm.go:602] duration metric: took 19.289872ms to restartPrimaryControlPlane
	I1206 10:06:32.433699  293728 kubeadm.go:403] duration metric: took 55.791147ms to StartCluster
	I1206 10:06:32.433714  293728 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.433786  293728 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.434769  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.434995  293728 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:06:32.435318  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:32.435370  293728 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:06:32.435471  293728 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-387337"
	I1206 10:06:32.435485  293728 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-387337"
	I1206 10:06:32.435510  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435575  293728 addons.go:70] Setting dashboard=true in profile "newest-cni-387337"
	I1206 10:06:32.435608  293728 addons.go:239] Setting addon dashboard=true in "newest-cni-387337"
	W1206 10:06:32.435630  293728 addons.go:248] addon dashboard should already be in state true
	I1206 10:06:32.435689  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435986  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436310  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436715  293728 addons.go:70] Setting default-storageclass=true in profile "newest-cni-387337"
	I1206 10:06:32.436742  293728 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-387337"
	I1206 10:06:32.437054  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.440794  293728 out.go:179] * Verifying Kubernetes components...
	I1206 10:06:32.443631  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:32.498221  293728 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 10:06:32.501060  293728 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 10:06:32.503631  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 10:06:32.503654  293728 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 10:06:32.503744  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.508648  293728 addons.go:239] Setting addon default-storageclass=true in "newest-cni-387337"
	I1206 10:06:32.508690  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.509493  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.523049  293728 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:06:32.526921  293728 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.526947  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:06:32.527022  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.570818  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.571691  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.595638  293728 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.595658  293728 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:06:32.595716  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.624247  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.694342  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:06:32.746370  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 10:06:32.746390  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 10:06:32.765644  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.786998  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 10:06:32.787020  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 10:06:32.804870  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.820938  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 10:06:32.821012  293728 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 10:06:32.877095  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 10:06:32.877165  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 10:06:32.903565  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 10:06:32.903593  293728 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 10:06:32.916625  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 10:06:32.916699  293728 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 10:06:32.930049  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 10:06:32.930072  293728 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 10:06:32.943222  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 10:06:32.943248  293728 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 10:06:32.958124  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:32.958148  293728 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 10:06:32.971454  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:33.482958  293728 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:06:33.483036  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:33.483155  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483183  293728 retry.go:31] will retry after 318.519734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
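
Each failed apply above is followed by a retry.go line with a slightly different, jittered delay; the failures themselves are all the same root cause — the apiserver is not yet listening on localhost:8443, so kubectl cannot fetch the OpenAPI schema it needs for client-side validation. A minimal sketch of the retry-with-jittered-delay pattern behind those lines (illustrative only, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts run out, sleeping a
// growing, jittered delay between tries — the shape behind the
// "will retry after 318.519734ms" lines in the log above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// Stand-in for the kubectl apply that keeps failing while the
	// apiserver on localhost:8443 is still down.
	err := retry(3, 200*time.Millisecond, func() error {
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	})
	fmt.Println("giving up:", err)
}
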
	W1206 10:06:33.483231  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483244  293728 retry.go:31] will retry after 239.813026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:33.483501  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483518  293728 retry.go:31] will retry after 128.431008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.612510  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:33.679631  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.679670  293728 retry.go:31] will retry after 494.781452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.723639  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:33.790368  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.790401  293728 retry.go:31] will retry after 373.145908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.802573  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:33.864526  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.864571  293728 retry.go:31] will retry after 555.783365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.983818  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:34.164188  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:34.174768  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:34.315072  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.315120  293728 retry.go:31] will retry after 679.653646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:34.319455  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.319548  293728 retry.go:31] will retry after 695.531102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.421513  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:34.483690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:34.487662  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.487697  293728 retry.go:31] will retry after 692.225187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.983561  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:34.995819  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:35.016010  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:35.122122  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.122225  293728 retry.go:31] will retry after 1.142566381s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:35.138887  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.138925  293728 retry.go:31] will retry after 649.678663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.180839  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:31.222846  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:33.722513  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:35.247363  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.247415  293728 retry.go:31] will retry after 580.881907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.483771  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:35.788736  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:35.829213  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:35.856520  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.856598  293728 retry.go:31] will retry after 1.553154314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:35.896812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.896844  293728 retry.go:31] will retry after 933.683215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.984035  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.265085  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:36.326884  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.326918  293728 retry.go:31] will retry after 708.086155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.484141  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.831542  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:36.897118  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.897156  293728 retry.go:31] will retry after 1.33074055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.983504  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.035538  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:37.096009  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.096042  293728 retry.go:31] will retry after 1.790090237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.410554  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:37.480541  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.480578  293728 retry.go:31] will retry after 966.279559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.483641  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.984118  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:38.228242  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:38.293907  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.293942  293728 retry.go:31] will retry after 2.616205885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.447170  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:38.483864  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:38.514147  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.514181  293728 retry.go:31] will retry after 2.714109668s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.886857  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:38.951997  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.952029  293728 retry.go:31] will retry after 2.462359856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.983614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.483264  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.983242  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:35.723224  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:38.222673  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:40.483248  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:40.910479  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:40.983819  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:40.985785  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:40.985821  293728 retry.go:31] will retry after 2.652074408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.229298  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:41.298980  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.299018  293728 retry.go:31] will retry after 3.795353676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.415143  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:41.478696  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.478758  293728 retry.go:31] will retry after 5.28721939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.483845  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:41.983945  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.483250  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.483241  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.638309  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:43.697835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:43.697874  293728 retry.go:31] will retry after 4.887793633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:43.983195  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.483546  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.983775  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.095370  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:45.192562  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:45.192602  293728 retry.go:31] will retry after 8.015655906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:40.722829  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:42.723326  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:45.223605  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:45.483497  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.984044  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.483220  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.766179  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:46.829923  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:46.829956  293728 retry.go:31] will retry after 4.667102636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:46.984011  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.483312  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.984058  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.586389  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:48.650814  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:48.650848  293728 retry.go:31] will retry after 13.339615646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:48.983299  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.483453  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.983414  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:47.722614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:50.222614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:50.483943  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:50.983588  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.497329  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:51.584226  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:51.584262  293728 retry.go:31] will retry after 10.765270657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:51.983783  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.484023  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.983169  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.208585  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:53.275063  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:53.275124  293728 retry.go:31] will retry after 12.265040886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:53.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.983886  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.483520  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.983246  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:52.722502  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:54.722548  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:55.484066  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:55.983753  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.483532  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.983522  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.483514  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.983263  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.483994  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.983173  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.483759  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.983187  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:56.722592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:58.723298  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:00.483755  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:00.984174  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.983995  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.991432  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:02.091463  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.091500  293728 retry.go:31] will retry after 13.890333948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.349878  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:02.411835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.411870  293728 retry.go:31] will retry after 7.977295138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.483150  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:02.983902  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.483778  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.983278  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.483894  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.983934  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:01.222997  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:03.722642  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:05.483794  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:05.540834  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:05.606800  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:05.606832  293728 retry.go:31] will retry after 11.29369971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:05.983418  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.983887  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.483439  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.984054  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.483236  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.983521  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.483231  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:06.222598  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:08.222649  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:10.390061  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:10.460795  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:10.460828  293728 retry.go:31] will retry after 24.523063216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:10.483989  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:10.983508  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.483968  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.983921  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.983503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.483736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.983533  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.483788  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.983198  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:10.722891  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:13.222531  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:15.223567  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:15.483180  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:15.982114  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:07:15.983591  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:16.054278  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:16.054318  293728 retry.go:31] will retry after 20.338606766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:16.484114  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:16.901533  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:07:16.984157  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:17.001960  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:17.001998  293728 retry.go:31] will retry after 24.827417164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:17.483261  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:17.983420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.983281  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.483741  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.983176  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:17.722636  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:20.222572  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:20.483695  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:20.983984  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.483862  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.483812  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.983632  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.483796  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.984175  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:22.222705  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:24.723752  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:25.483633  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:25.984006  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.483830  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.983203  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.483211  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.983237  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.484156  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.983736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.483880  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.984116  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:27.222614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:29.223485  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:30.483549  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:30.983243  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.483786  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.983608  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:32.483844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:32.483952  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:32.508469  293728 cri.go:89] found id: ""
	I1206 10:07:32.508497  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.508505  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:32.508512  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:32.508574  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:32.533265  293728 cri.go:89] found id: ""
	I1206 10:07:32.533288  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.533297  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:32.533303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:32.533364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:32.562655  293728 cri.go:89] found id: ""
	I1206 10:07:32.562686  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.562695  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:32.562702  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:32.562769  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:32.587755  293728 cri.go:89] found id: ""
	I1206 10:07:32.587781  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.587789  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:32.587796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:32.587855  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:32.613253  293728 cri.go:89] found id: ""
	I1206 10:07:32.613284  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.613292  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:32.613305  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:32.613364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:32.638621  293728 cri.go:89] found id: ""
	I1206 10:07:32.638648  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.638656  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:32.638662  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:32.638775  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:32.663624  293728 cri.go:89] found id: ""
	I1206 10:07:32.663649  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.663657  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:32.663664  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:32.663724  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:32.687850  293728 cri.go:89] found id: ""
	I1206 10:07:32.687872  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.687881  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
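
Having failed to find a live apiserver process, the cri.go lines walk the expected components one by one; crictl ps -a --quiet --name=<component> prints one container ID per line in any state, including exited, so the empty results (found id: "" above) mean those components were never created at all. A sketch of that lookup, again with a hypothetical local stand-in for the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // output runs a shell command locally in place of the SSH runner and
    // returns its stdout.
    func output(cmd string) (string, error) {
        out, err := exec.Command("/bin/sh", "-c", cmd).Output()
        return string(out), err
    }

    // findContainers lists containers in any state whose name matches; with
    // --quiet, crictl prints only container IDs, one per line, so an empty
    // result means no container exists for that component.
    func findContainers(name string) ([]string, error) {
        out, err := output("sudo crictl ps -a --quiet --name=" + name)
        if err != nil {
            return nil, err
        }
        return strings.Fields(out), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := findContainers(c)
            fmt.Printf("%s: %d container(s), err=%v\n", c, len(ids), err)
        }
    }
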
	I1206 10:07:32.687890  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:32.687901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:32.763755  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:32.763831  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:32.788174  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:32.788242  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:32.866103  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:32.857634    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.858159    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.859825    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.860421    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.862051    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:32.857634    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.858159    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.859825    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.860421    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.862051    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:32.866126  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:32.866138  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:32.891711  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:32.891745  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
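
With no containers to inspect, the cycle ends by collecting raw logs instead. Note the shell-level fallbacks in the last command: which crictl || echo crictl keeps the pipeline alive when crictl is not on the non-interactive PATH, and || sudo docker ps -a covers Docker-runtime nodes. A sketch of running the same collectors in order, using local exec in place of the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the "Gathering logs for ..." sequence above.
        collectors := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, c := range collectors {
            out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
            fmt.Printf("== %s: %d bytes, err=%v\n", c.name, len(out), err)
        }
    }
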
	I1206 10:07:34.985041  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:35.094954  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:35.094988  293728 retry.go:31] will retry after 34.21540436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:07:31.722556  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:33.722685  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:35.421586  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:35.432096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:35.432164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:35.457419  293728 cri.go:89] found id: ""
	I1206 10:07:35.457442  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.457451  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:35.457457  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:35.457520  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:35.481490  293728 cri.go:89] found id: ""
	I1206 10:07:35.481513  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.481521  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:35.481527  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:35.481586  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:35.506409  293728 cri.go:89] found id: ""
	I1206 10:07:35.506432  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.506441  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:35.506447  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:35.506512  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:35.534896  293728 cri.go:89] found id: ""
	I1206 10:07:35.534923  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.534932  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:35.534939  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:35.534997  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:35.560020  293728 cri.go:89] found id: ""
	I1206 10:07:35.560043  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.560052  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:35.560058  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:35.560115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:35.584963  293728 cri.go:89] found id: ""
	I1206 10:07:35.585028  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.585042  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:35.585049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:35.585110  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:35.617464  293728 cri.go:89] found id: ""
	I1206 10:07:35.617487  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.617495  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:35.617501  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:35.617562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:35.642187  293728 cri.go:89] found id: ""
	I1206 10:07:35.642219  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.642228  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:35.642238  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:35.642250  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:35.655709  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:35.655738  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:35.728266  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:35.714434    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.715121    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.716831    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.717292    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.718947    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:35.714434    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.715121    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.716831    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.717292    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.718947    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
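Every `kubectl describe nodes` attempt in this section fails the same way: `dial tcp [::1]:8443: connect: connection refused`, meaning the TCP connection is actively rejected because nothing is listening on port 8443, which is consistent with the empty `kube-apiserver` probes above. A hedged Go sketch of that distinction, with the host and port copied from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" = the host answered the SYN with a RST:
        // the machine is up but nothing is bound to the port. A timeout
        // would instead suggest the host or the network path is down.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
        if err != nil {
            fmt.Println("apiserver port not accepting connections:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }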
	I1206 10:07:35.728336  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:35.728379  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:35.766222  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:35.766301  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:35.823000  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:35.823024  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:36.393185  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:36.458951  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:36.458990  293728 retry.go:31] will retry after 24.220809087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
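The `retry.go:31` line shows the addon apply being wrapped in a retry loop with a randomized wait (here 24.22s). A rough sketch of that pattern; the manifest path is from the log, while the attempt count and jitter policy here are assumptions, not minikube's exact backoff:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply --force -f <manifest>` until it
    // succeeds or attempts are exhausted, sleeping a randomized interval
    // between tries, roughly like the retry.go behavior in the log.
    func applyWithRetry(manifest string, attempts int) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("%v: %s", err, out)
            wait := time.Duration(5+rand.Intn(40)) * time.Second
            fmt.Printf("apply failed, will retry after %s: %v\n", wait, lastErr)
            time.Sleep(wait)
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
            fmt.Println("giving up:", err)
        }
    }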
	I1206 10:07:38.379270  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:38.389923  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:38.389993  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:38.416450  293728 cri.go:89] found id: ""
	I1206 10:07:38.416517  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.416540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:38.416558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:38.416635  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:38.442635  293728 cri.go:89] found id: ""
	I1206 10:07:38.442663  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.442672  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:38.442680  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:38.442742  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:38.469797  293728 cri.go:89] found id: ""
	I1206 10:07:38.469824  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.469834  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:38.469840  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:38.469899  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:38.497073  293728 cri.go:89] found id: ""
	I1206 10:07:38.497098  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.497107  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:38.497113  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:38.497194  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:38.527432  293728 cri.go:89] found id: ""
	I1206 10:07:38.527465  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.527474  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:38.527481  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:38.527540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:38.554253  293728 cri.go:89] found id: ""
	I1206 10:07:38.554278  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.554290  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:38.554300  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:38.554368  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:38.580022  293728 cri.go:89] found id: ""
	I1206 10:07:38.580070  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.580080  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:38.580087  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:38.580165  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:38.604967  293728 cri.go:89] found id: ""
	I1206 10:07:38.604992  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.605001  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:38.605010  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:38.605041  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:38.672012  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:38.663132    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.663961    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.665865    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.666410    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.668022    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:38.663132    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.663961    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.665865    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.666410    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.668022    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:38.672044  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:38.672075  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:38.697533  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:38.697567  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:38.750151  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:38.750176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:38.835463  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:38.835500  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:07:35.722832  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:38.222743  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
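Interleaved with the 293728 diagnostics, process 287962 (the no-preload test) is polling the node's Ready condition against 192.168.76.2:8443 and hitting the same refused connection. A minimal sketch of such a poll, assuming anonymous HTTPS access purely to observe connectivity (a real client would authenticate; the node name and URL are taken from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Self-signed apiserver cert; skip verification for this probe only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359"
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                // This is the path producing the node_ready.go warnings above.
                fmt.Printf("error getting node (will retry): %v\n", err)
                time.Sleep(2500 * time.Millisecond)
                continue
            }
            resp.Body.Close()
            // Any HTTP status (even 401/403) means the apiserver is at least up.
            fmt.Println("apiserver reachable:", resp.Status)
            return
        }
    }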
	I1206 10:07:41.350690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:41.361865  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:41.361934  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:41.387755  293728 cri.go:89] found id: ""
	I1206 10:07:41.387781  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.387789  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:41.387796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:41.387854  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:41.412482  293728 cri.go:89] found id: ""
	I1206 10:07:41.412510  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.412519  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:41.412526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:41.412591  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:41.437604  293728 cri.go:89] found id: ""
	I1206 10:07:41.437635  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.437644  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:41.437650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:41.437722  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:41.462503  293728 cri.go:89] found id: ""
	I1206 10:07:41.462573  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.462597  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:41.462616  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:41.462703  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:41.487720  293728 cri.go:89] found id: ""
	I1206 10:07:41.487742  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.487750  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:41.487757  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:41.487819  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:41.513291  293728 cri.go:89] found id: ""
	I1206 10:07:41.513321  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.513332  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:41.513342  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:41.513420  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:41.547109  293728 cri.go:89] found id: ""
	I1206 10:07:41.547132  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.547141  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:41.547147  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:41.547209  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:41.572514  293728 cri.go:89] found id: ""
	I1206 10:07:41.572585  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.572607  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:41.572628  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:41.572669  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:41.629345  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:41.629378  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:41.643897  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:41.643928  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:41.713946  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:41.705234    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.705673    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.707580    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.708362    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.710158    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:41.705234    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.705673    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.707580    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.708362    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.710158    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:41.714006  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:41.714025  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:41.745589  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:41.745645  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:41.830134  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:41.893553  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:41.893593  293728 retry.go:31] will retry after 44.351115962s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:44.324517  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:44.335432  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:44.335507  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:44.365594  293728 cri.go:89] found id: ""
	I1206 10:07:44.365621  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.365630  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:44.365637  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:44.365723  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:44.390876  293728 cri.go:89] found id: ""
	I1206 10:07:44.390909  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.390919  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:44.390944  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:44.391026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:44.421424  293728 cri.go:89] found id: ""
	I1206 10:07:44.421448  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.421462  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:44.421468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:44.421525  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:44.445299  293728 cri.go:89] found id: ""
	I1206 10:07:44.445325  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.445335  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:44.445341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:44.445454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:44.473977  293728 cri.go:89] found id: ""
	I1206 10:07:44.473999  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.474008  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:44.474014  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:44.474072  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:44.501273  293728 cri.go:89] found id: ""
	I1206 10:07:44.501299  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.501308  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:44.501341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:44.501415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:44.525106  293728 cri.go:89] found id: ""
	I1206 10:07:44.525136  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.525154  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:44.525161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:44.525223  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:44.550546  293728 cri.go:89] found id: ""
	I1206 10:07:44.550571  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.550580  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:44.550589  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:44.550600  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:44.615941  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:44.607694    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.608515    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610041    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610630    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.612121    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:44.607694    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.608515    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610041    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610630    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.612121    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:44.615962  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:44.615975  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:44.641346  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:44.641377  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:44.669493  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:44.669520  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:44.727196  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:44.727357  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:07:40.722832  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:43.222679  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:45.222775  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:47.260652  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:47.271164  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:47.271238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:47.295481  293728 cri.go:89] found id: ""
	I1206 10:07:47.295506  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.295515  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:47.295521  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:47.295581  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:47.321861  293728 cri.go:89] found id: ""
	I1206 10:07:47.321884  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.321892  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:47.321898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:47.321954  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:47.346071  293728 cri.go:89] found id: ""
	I1206 10:07:47.346094  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.346103  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:47.346110  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:47.346169  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:47.373210  293728 cri.go:89] found id: ""
	I1206 10:07:47.373234  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.373242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:47.373249  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:47.373312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:47.403706  293728 cri.go:89] found id: ""
	I1206 10:07:47.403729  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.403739  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:47.403745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:47.403810  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:47.433807  293728 cri.go:89] found id: ""
	I1206 10:07:47.433831  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.433840  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:47.433847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:47.433904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:47.462210  293728 cri.go:89] found id: ""
	I1206 10:07:47.462233  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.462241  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:47.462247  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:47.462308  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:47.486445  293728 cri.go:89] found id: ""
	I1206 10:07:47.486523  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.486546  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:47.486567  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:47.486597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:47.500083  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:47.500114  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:47.568637  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:47.558715    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.559476    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561148    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561466    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.564516    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:47.558715    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.559476    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561148    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561466    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.564516    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:47.568661  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:47.568683  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:47.598178  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:47.598213  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:47.629224  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:47.629249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:50.187574  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:47.727856  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:50.223331  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:50.198529  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:50.198609  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:50.224708  293728 cri.go:89] found id: ""
	I1206 10:07:50.224731  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.224738  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:50.224744  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:50.224806  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:50.253337  293728 cri.go:89] found id: ""
	I1206 10:07:50.253361  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.253370  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:50.253376  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:50.253433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:50.278723  293728 cri.go:89] found id: ""
	I1206 10:07:50.278750  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.278759  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:50.278766  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:50.278830  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:50.308736  293728 cri.go:89] found id: ""
	I1206 10:07:50.308803  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.308822  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:50.308834  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:50.308894  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:50.333136  293728 cri.go:89] found id: ""
	I1206 10:07:50.333162  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.333171  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:50.333177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:50.333263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:50.358071  293728 cri.go:89] found id: ""
	I1206 10:07:50.358105  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.358114  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:50.358137  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:50.358215  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:50.382078  293728 cri.go:89] found id: ""
	I1206 10:07:50.382111  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.382120  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:50.382141  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:50.382222  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:50.407225  293728 cri.go:89] found id: ""
	I1206 10:07:50.407261  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.407270  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:50.407279  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:50.407291  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:50.466553  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:50.466588  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:50.480420  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:50.480450  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:50.546503  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:50.538132    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.538890    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.540463    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.541036    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.542600    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:50.538132    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.538890    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.540463    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.541036    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.542600    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:50.546523  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:50.546546  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:50.573208  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:50.573243  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:53.100604  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:53.111611  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:53.111683  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:53.136465  293728 cri.go:89] found id: ""
	I1206 10:07:53.136494  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.136503  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:53.136510  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:53.136584  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:53.167397  293728 cri.go:89] found id: ""
	I1206 10:07:53.167419  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.167427  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:53.167433  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:53.167501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:53.191735  293728 cri.go:89] found id: ""
	I1206 10:07:53.191769  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.191778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:53.191784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:53.191849  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:53.216472  293728 cri.go:89] found id: ""
	I1206 10:07:53.216495  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.216506  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:53.216513  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:53.216570  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:53.242936  293728 cri.go:89] found id: ""
	I1206 10:07:53.242957  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.242966  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:53.242972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:53.243035  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:53.274015  293728 cri.go:89] found id: ""
	I1206 10:07:53.274041  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.274050  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:53.274056  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:53.274118  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:53.303348  293728 cri.go:89] found id: ""
	I1206 10:07:53.303371  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.303415  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:53.303422  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:53.303486  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:53.332691  293728 cri.go:89] found id: ""
	I1206 10:07:53.332716  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.332724  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:53.332733  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:53.332749  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:53.346274  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:53.346303  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:53.412178  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:53.403243    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.404038    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.405704    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.406009    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.408013    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:53.403243    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.404038    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.405704    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.406009    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.408013    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
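
[editor's note] Each "failed describe nodes" entry prints the command's stderr twice: once embedded in the error string and again between the ** stderr ** / ** /stderr ** markers. A hedged reconstruction of that formatting, assuming separate stdout/stderr buffers (this mirrors the log layout; it is not minikube's actual logs.go code):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes", "--kubeconfig", "/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // The error line embeds stderr once, then the markers repeat
            // it, producing the duplicated block seen in this report.
            fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s output: \n** stderr ** \n%s\n** /stderr **\n",
                err, stdout.String(), stderr.String(), stderr.String())
        }
    }
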
	I1206 10:07:53.412203  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:53.412216  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:53.437974  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:53.438008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:53.469789  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:53.469816  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:07:52.723301  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:55.222438  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:56.029614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:56.044312  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:56.044385  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:56.074035  293728 cri.go:89] found id: ""
	I1206 10:07:56.074061  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.074071  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:56.074077  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:56.074137  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:56.101362  293728 cri.go:89] found id: ""
	I1206 10:07:56.101387  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.101397  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:56.101403  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:56.101472  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:56.132837  293728 cri.go:89] found id: ""
	I1206 10:07:56.132867  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.132876  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:56.132882  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:56.132949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:56.162095  293728 cri.go:89] found id: ""
	I1206 10:07:56.162121  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.162129  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:56.162136  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:56.162195  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:56.190088  293728 cri.go:89] found id: ""
	I1206 10:07:56.190113  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.190122  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:56.190128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:56.190188  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:56.217327  293728 cri.go:89] found id: ""
	I1206 10:07:56.217355  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.217365  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:56.217372  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:56.217432  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:56.242210  293728 cri.go:89] found id: ""
	I1206 10:07:56.242246  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.242255  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:56.242261  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:56.242330  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:56.266843  293728 cri.go:89] found id: ""
	I1206 10:07:56.266871  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.266879  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:56.266888  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:56.266900  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:56.324906  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:56.324941  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:56.339074  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:56.339111  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:56.407395  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:56.398763    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.399992    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.400889    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.401941    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.403601    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:56.398763    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.399992    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.400889    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.401941    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.403601    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:56.407417  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:56.407434  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:56.433408  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:56.433442  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
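
[editor's note] The "container status" step uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl wherever it lives and fall back to docker if crictl is absent or fails. The same runtime-agnostic idea as a Go sketch (helper name and error handling are assumptions; the real command is the single shell line in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl when it is on PATH and falls back
    // to docker otherwise, mirroring the shell one-liner above.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(string(out))
    }
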
	I1206 10:07:58.962420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:58.984606  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:58.984688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:59.037604  293728 cri.go:89] found id: ""
	I1206 10:07:59.037795  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.038054  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:59.038096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:59.038236  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:59.074512  293728 cri.go:89] found id: ""
	I1206 10:07:59.074555  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.074564  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:59.074571  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:59.074638  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:59.101868  293728 cri.go:89] found id: ""
	I1206 10:07:59.101895  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.101904  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:59.101910  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:59.101973  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:59.127188  293728 cri.go:89] found id: ""
	I1206 10:07:59.127214  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.127223  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:59.127230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:59.127286  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:59.152234  293728 cri.go:89] found id: ""
	I1206 10:07:59.152259  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.152268  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:59.152274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:59.152342  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:59.177629  293728 cri.go:89] found id: ""
	I1206 10:07:59.177654  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.177663  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:59.177670  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:59.177728  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:59.202156  293728 cri.go:89] found id: ""
	I1206 10:07:59.202185  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.202195  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:59.202201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:59.202261  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:59.227130  293728 cri.go:89] found id: ""
	I1206 10:07:59.227165  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.227174  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:59.227183  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:59.227204  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:59.241522  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:59.241597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:59.311704  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:59.302465    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.302959    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.304730    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.305205    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.306765    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:59.302465    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.302959    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.304730    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.305205    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.306765    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:59.311730  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:59.311742  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:59.337213  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:59.337246  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:59.365911  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:59.365940  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:07:57.222678  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:59.223226  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:00.680788  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:08:00.745958  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:00.746077  293728 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
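
[editor's note] The "apply failed, will retry" warning above comes from addon enablement: kubectl's client-side validation needs the OpenAPI document from the apiserver, so with nothing listening on localhost:8443 the apply cannot even validate the manifest. A minimal retry wrapper in the spirit of that message, assuming fixed attempts and backoff (minikube's real callback/retry plumbing differs):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry shells out to the same kubectl apply the log shows,
    // retrying on failure. Paths mirror the log; the attempt count and
    // backoff are assumed values.
    func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
                "apply", "--force", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            time.Sleep(backoff)
        }
        return fmt.Errorf("apply %s failed after %d attempts: %w", manifest, attempts, err)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5, 5*time.Second); err != nil {
            fmt.Println(err)
        }
    }

The identical failure recurs for storageclass.yaml further down; both addons are blocked on the same unreachable apiserver rather than on anything in the manifests themselves.
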
	I1206 10:08:01.925540  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:01.936468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:01.936592  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:01.965164  293728 cri.go:89] found id: ""
	I1206 10:08:01.965242  293728 logs.go:282] 0 containers: []
	W1206 10:08:01.965277  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:01.965302  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:01.965393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:02.013736  293728 cri.go:89] found id: ""
	I1206 10:08:02.013774  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.013783  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:02.013790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:02.013862  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:02.058535  293728 cri.go:89] found id: ""
	I1206 10:08:02.058627  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.058651  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:02.058685  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:02.058798  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:02.091149  293728 cri.go:89] found id: ""
	I1206 10:08:02.091213  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.091242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:02.091286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:02.091460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:02.116844  293728 cri.go:89] found id: ""
	I1206 10:08:02.116870  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.116878  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:02.116884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:02.116945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:02.143338  293728 cri.go:89] found id: ""
	I1206 10:08:02.143439  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.143463  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:02.143485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:02.143573  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:02.169310  293728 cri.go:89] found id: ""
	I1206 10:08:02.169333  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.169342  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:02.169348  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:02.169410  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:02.200025  293728 cri.go:89] found id: ""
	I1206 10:08:02.200096  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.200104  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:02.200113  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:02.200125  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:02.257304  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:02.257340  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:02.271507  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:02.271541  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:02.341058  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:02.331854    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.332684    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334338    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334769    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.336486    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:02.331854    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.332684    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334338    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334769    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.336486    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:02.341084  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:02.341097  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:02.367636  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:02.367672  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:04.899503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:04.910154  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:04.910231  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:04.934598  293728 cri.go:89] found id: ""
	I1206 10:08:04.934623  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.934632  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:04.934638  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:04.934699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:04.959971  293728 cri.go:89] found id: ""
	I1206 10:08:04.959995  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.960004  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:04.960010  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:04.960071  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:05.027645  293728 cri.go:89] found id: ""
	I1206 10:08:05.027668  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.027677  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:05.027683  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:05.027758  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:05.077828  293728 cri.go:89] found id: ""
	I1206 10:08:05.077868  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.077878  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:05.077884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:05.077946  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:05.103986  293728 cri.go:89] found id: ""
	I1206 10:08:05.104014  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.104023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:05.104029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:05.104091  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:05.129703  293728 cri.go:89] found id: ""
	I1206 10:08:05.129778  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.129822  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:05.129843  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:05.129930  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:05.156958  293728 cri.go:89] found id: ""
	I1206 10:08:05.156982  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.156990  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:05.156996  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:05.157058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:05.182537  293728 cri.go:89] found id: ""
	I1206 10:08:05.182565  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.182575  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:05.182585  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:05.182598  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:08:01.722650  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:04.222533  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
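
[editor's note] Interleaved with the log gathering, process 287962 (the no-preload test) keeps polling the node's Ready condition and getting "connection refused" from 192.168.76.2:8443. At the transport level that warning reduces to a failed TCP dial; a minimal sketch of the same check (address taken from the log; the interval is assumed, and no Kubernetes auth is involved, so this only verifies that the port accepts connections):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for {
            conn, err := net.DialTimeout("tcp", "192.168.76.2:8443", 2*time.Second)
            if err != nil {
                // Matches the log's "dial tcp ... connect: connection refused".
                fmt.Println("will retry:", err)
                time.Sleep(2500 * time.Millisecond)
                continue
            }
            conn.Close()
            fmt.Println("apiserver port is accepting connections")
            return
        }
    }
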
	I1206 10:08:05.196389  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:05.196419  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:05.262239  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:05.253199    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.253990    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.255826    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.256391    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.257908    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:05.253199    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.253990    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.255826    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.256391    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.257908    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:05.262265  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:05.262278  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:05.288138  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:05.288178  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:05.316468  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:05.316497  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:07.872986  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:07.886594  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:07.886666  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:07.912554  293728 cri.go:89] found id: ""
	I1206 10:08:07.912580  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.912589  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:07.912595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:07.912668  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:07.938006  293728 cri.go:89] found id: ""
	I1206 10:08:07.938033  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.938042  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:07.938049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:07.938107  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:07.967969  293728 cri.go:89] found id: ""
	I1206 10:08:07.967995  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.968004  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:07.968011  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:07.968079  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:08.001472  293728 cri.go:89] found id: ""
	I1206 10:08:08.001495  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.001504  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:08.001511  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:08.001577  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:08.064509  293728 cri.go:89] found id: ""
	I1206 10:08:08.064538  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.064547  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:08.064554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:08.064612  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:08.094308  293728 cri.go:89] found id: ""
	I1206 10:08:08.094376  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.094402  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:08.094434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:08.094522  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:08.124650  293728 cri.go:89] found id: ""
	I1206 10:08:08.124695  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.124705  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:08.124712  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:08.124782  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:08.150816  293728 cri.go:89] found id: ""
	I1206 10:08:08.150851  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.150860  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:08.150868  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:08.150879  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:08.207170  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:08.207203  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:08.220834  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:08.220860  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:08.285113  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:08.285138  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:08.285153  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:08.311342  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:08.311548  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:09.310714  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:08:09.371609  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:09.371709  293728 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 10:08:06.222644  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:08.722561  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:10.840228  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:10.850847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:10.850914  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:10.881439  293728 cri.go:89] found id: ""
	I1206 10:08:10.881517  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.881540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:10.881555  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:10.881629  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:10.910942  293728 cri.go:89] found id: ""
	I1206 10:08:10.910971  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.910980  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:10.910987  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:10.911049  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:10.936471  293728 cri.go:89] found id: ""
	I1206 10:08:10.936495  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.936503  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:10.936509  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:10.936566  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:10.964540  293728 cri.go:89] found id: ""
	I1206 10:08:10.964567  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.964575  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:10.964581  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:10.964650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:11.035295  293728 cri.go:89] found id: ""
	I1206 10:08:11.035322  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.035332  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:11.035354  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:11.035433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:11.081240  293728 cri.go:89] found id: ""
	I1206 10:08:11.081266  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.081275  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:11.081282  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:11.081347  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:11.109502  293728 cri.go:89] found id: ""
	I1206 10:08:11.109543  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.109554  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:11.109561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:11.109625  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:11.138072  293728 cri.go:89] found id: ""
	I1206 10:08:11.138100  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.138113  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:11.138122  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:11.138134  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:11.207996  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:11.208060  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:11.208081  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:11.234490  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:11.234525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:11.263495  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:11.263525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:11.323991  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:11.324034  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:13.838014  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:13.849112  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:13.849181  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:13.873403  293728 cri.go:89] found id: ""
	I1206 10:08:13.873472  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.873498  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:13.873515  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:13.873602  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:13.900596  293728 cri.go:89] found id: ""
	I1206 10:08:13.900616  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.900625  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:13.900631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:13.900694  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:13.925385  293728 cri.go:89] found id: ""
	I1206 10:08:13.925409  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.925417  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:13.925424  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:13.925481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:13.950796  293728 cri.go:89] found id: ""
	I1206 10:08:13.950823  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.950837  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:13.950844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:13.950902  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:14.028934  293728 cri.go:89] found id: ""
	I1206 10:08:14.028964  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.028973  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:14.028979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:14.029058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:14.063925  293728 cri.go:89] found id: ""
	I1206 10:08:14.063948  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.063957  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:14.063963  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:14.064024  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:14.091439  293728 cri.go:89] found id: ""
	I1206 10:08:14.091465  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.091473  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:14.091480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:14.091556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:14.116453  293728 cri.go:89] found id: ""
	I1206 10:08:14.116476  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.116485  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:14.116494  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:14.116506  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:14.173576  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:14.173615  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:14.187707  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:14.187736  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:14.256417  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:14.256440  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:14.256452  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:14.281458  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:14.281490  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:10.722908  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:13.223465  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:16.809300  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:16.820406  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:16.820481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:16.845040  293728 cri.go:89] found id: ""
	I1206 10:08:16.845105  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.845130  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:16.845144  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:16.845217  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:16.875450  293728 cri.go:89] found id: ""
	I1206 10:08:16.875475  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.875484  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:16.875500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:16.875562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:16.902002  293728 cri.go:89] found id: ""
	I1206 10:08:16.902048  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.902059  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:16.902068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:16.902146  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:16.927319  293728 cri.go:89] found id: ""
	I1206 10:08:16.927353  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.927361  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:16.927368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:16.927466  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:16.952239  293728 cri.go:89] found id: ""
	I1206 10:08:16.952265  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.952273  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:16.952280  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:16.952386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:16.994322  293728 cri.go:89] found id: ""
	I1206 10:08:16.994351  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.994360  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:16.994368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:16.994437  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:17.032079  293728 cri.go:89] found id: ""
	I1206 10:08:17.032113  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.032122  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:17.032128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:17.032201  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:17.079256  293728 cri.go:89] found id: ""
	I1206 10:08:17.079321  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.079343  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:17.079364  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:17.079406  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:17.104677  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:17.104707  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:17.136676  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:17.136701  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:17.195915  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:17.195950  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:17.209626  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:17.209653  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:17.278745  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:19.780767  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:19.791658  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:19.791756  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:19.820516  293728 cri.go:89] found id: ""
	I1206 10:08:19.820539  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.820547  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:19.820554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:19.820652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:19.845473  293728 cri.go:89] found id: ""
	I1206 10:08:19.845499  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.845507  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:19.845514  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:19.845572  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:19.871555  293728 cri.go:89] found id: ""
	I1206 10:08:19.871580  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.871592  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:19.871598  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:19.871658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:19.902754  293728 cri.go:89] found id: ""
	I1206 10:08:19.902778  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.902787  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:19.902793  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:19.902853  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:19.927447  293728 cri.go:89] found id: ""
	I1206 10:08:19.927473  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.927482  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:19.927489  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:19.927549  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:19.951607  293728 cri.go:89] found id: ""
	I1206 10:08:19.951634  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.951644  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:19.951651  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:19.951718  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:20.023839  293728 cri.go:89] found id: ""
	I1206 10:08:20.023868  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.023879  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:20.023886  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:20.023951  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:20.064702  293728 cri.go:89] found id: ""
	I1206 10:08:20.064730  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.064739  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:20.064748  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:20.064761  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:20.131531  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:20.131555  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:20.131566  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:20.157955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:20.157991  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:20.188100  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:20.188126  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:15.723287  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:18.223318  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:20.248399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:20.248437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:22.762476  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:22.774338  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:22.774408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:22.803197  293728 cri.go:89] found id: ""
	I1206 10:08:22.803220  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.803228  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:22.803234  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:22.803292  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:22.828985  293728 cri.go:89] found id: ""
	I1206 10:08:22.829009  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.829018  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:22.829024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:22.829084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:22.857670  293728 cri.go:89] found id: ""
	I1206 10:08:22.857695  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.857704  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:22.857710  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:22.857770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:22.886863  293728 cri.go:89] found id: ""
	I1206 10:08:22.886889  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.886898  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:22.886905  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:22.886967  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:22.912046  293728 cri.go:89] found id: ""
	I1206 10:08:22.912072  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.912080  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:22.912086  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:22.912149  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:22.940438  293728 cri.go:89] found id: ""
	I1206 10:08:22.940516  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.940530  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:22.940538  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:22.940597  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:22.965932  293728 cri.go:89] found id: ""
	I1206 10:08:22.965957  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.965966  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:22.965973  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:22.966034  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:23.036167  293728 cri.go:89] found id: ""
	I1206 10:08:23.036194  293728 logs.go:282] 0 containers: []
	W1206 10:08:23.036203  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:23.036212  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:23.036224  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:23.054454  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:23.054481  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:23.120660  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:23.120680  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:23.120692  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:23.146879  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:23.146913  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:23.177356  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:23.177389  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:20.722592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:23.222550  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:25.739842  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:25.751155  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:25.751238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:25.781790  293728 cri.go:89] found id: ""
	I1206 10:08:25.781813  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.781821  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:25.781828  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:25.781884  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:25.809915  293728 cri.go:89] found id: ""
	I1206 10:08:25.809940  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.809948  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:25.809954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:25.810014  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:25.840293  293728 cri.go:89] found id: ""
	I1206 10:08:25.840318  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.840327  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:25.840334  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:25.840390  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:25.869368  293728 cri.go:89] found id: ""
	I1206 10:08:25.869401  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.869410  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:25.869416  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:25.869488  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:25.898302  293728 cri.go:89] found id: ""
	I1206 10:08:25.898335  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.898344  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:25.898351  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:25.898417  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:25.925837  293728 cri.go:89] found id: ""
	I1206 10:08:25.925864  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.925873  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:25.925880  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:25.925940  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:25.950501  293728 cri.go:89] found id: ""
	I1206 10:08:25.950537  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.950546  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:25.950552  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:25.950618  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:26.003264  293728 cri.go:89] found id: ""
	I1206 10:08:26.003294  293728 logs.go:282] 0 containers: []
	W1206 10:08:26.003305  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:26.003316  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:26.003327  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:26.046472  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:26.046503  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:26.091770  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:26.091798  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:26.148719  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:26.148755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:26.165689  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:26.165733  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:26.231230  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:26.245490  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:08:26.310812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:26.310914  293728 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:08:26.314238  293728 out.go:179] * Enabled addons: 
	I1206 10:08:26.317143  293728 addons.go:530] duration metric: took 1m53.881766525s for enable addons: enabled=[]
	I1206 10:08:28.731518  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:28.742380  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:28.742460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:28.768392  293728 cri.go:89] found id: ""
	I1206 10:08:28.768416  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.768425  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:28.768431  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:28.768489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:28.795017  293728 cri.go:89] found id: ""
	I1206 10:08:28.795043  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.795052  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:28.795059  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:28.795130  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:28.831707  293728 cri.go:89] found id: ""
	I1206 10:08:28.831734  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.831742  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:28.831748  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:28.831807  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:28.857267  293728 cri.go:89] found id: ""
	I1206 10:08:28.857293  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.857304  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:28.857317  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:28.857415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:28.887732  293728 cri.go:89] found id: ""
	I1206 10:08:28.887754  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.887762  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:28.887769  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:28.887827  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:28.912905  293728 cri.go:89] found id: ""
	I1206 10:08:28.912970  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.912984  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:28.912992  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:28.913051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:28.937740  293728 cri.go:89] found id: ""
	I1206 10:08:28.937764  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.937774  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:28.937781  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:28.937840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:28.964042  293728 cri.go:89] found id: ""
	I1206 10:08:28.964111  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.964126  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:28.964135  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:28.964147  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:29.034399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:29.034439  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:29.059150  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:29.059176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:29.134200  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:29.134222  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:29.134235  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:29.160868  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:29.160901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:25.722683  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:27.723593  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:30.222645  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:31.689201  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:31.700497  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:31.700569  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:31.726402  293728 cri.go:89] found id: ""
	I1206 10:08:31.726426  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.726434  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:31.726441  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:31.726503  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:31.752620  293728 cri.go:89] found id: ""
	I1206 10:08:31.752644  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.752652  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:31.752659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:31.752720  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:31.778722  293728 cri.go:89] found id: ""
	I1206 10:08:31.778749  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.778758  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:31.778764  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:31.778825  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:31.804730  293728 cri.go:89] found id: ""
	I1206 10:08:31.804754  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.804762  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:31.804768  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:31.804828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:31.834276  293728 cri.go:89] found id: ""
	I1206 10:08:31.834303  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.834312  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:31.834322  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:31.834388  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:31.859721  293728 cri.go:89] found id: ""
	I1206 10:08:31.859744  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.859752  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:31.859759  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:31.859889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:31.888679  293728 cri.go:89] found id: ""
	I1206 10:08:31.888746  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.888760  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:31.888767  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:31.888828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:31.915769  293728 cri.go:89] found id: ""
	I1206 10:08:31.915794  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.915804  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:31.915812  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:31.915825  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:31.929129  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:31.929155  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:32.017380  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
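
	Every kubectl attempt above dies the same way: nothing is listening on localhost:8443, so the client fails at TCP connect before any API discovery happens. A minimal Go sketch of that reachability check (illustrative only; the address comes from the log, the 2s timeout is an assumption):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// A bare TCP dial reproduces the "connect: connection refused" errors above:
	// with no kube-apiserver bound to the port, the dial fails immediately.
	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // e.g. connection refused
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
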
	I1206 10:08:32.017406  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:32.017420  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:32.046135  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:32.046218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:32.081462  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:32.081485  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
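
	The cycle above repeats for the rest of this test: minikube probes each expected control-plane component with sudo crictl ps -a --quiet --name=<component> (an empty ID list means no matching container exists, running or exited), then falls back to gathering kubelet, dmesg, containerd, and container-status logs. A standalone Go sketch of the probe step, assuming crictl is on PATH and sudo is passwordless; it mirrors the logged commands, not minikube's internal code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probe mirrors the log's "sudo crictl ps -a --quiet --name=<name>" calls:
	// crictl prints one container ID per line, so empty output means "found id: ''".
	func probe(name string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) != "", nil
	}

	func main() {
		// Same component set the log iterates over.
		for _, name := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		} {
			found, err := probe(name)
			if err != nil {
				fmt.Printf("probe %s failed: %v\n", name, err)
				continue
			}
			fmt.Printf("%s: found=%v\n", name, found)
		}
	}
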
	I1206 10:08:34.642406  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:34.653187  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:34.653263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:34.683091  293728 cri.go:89] found id: ""
	I1206 10:08:34.683116  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.683124  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:34.683130  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:34.683189  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:34.709426  293728 cri.go:89] found id: ""
	I1206 10:08:34.709453  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.709462  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:34.709468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:34.709528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:34.740189  293728 cri.go:89] found id: ""
	I1206 10:08:34.740215  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.740223  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:34.740230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:34.740289  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:34.769902  293728 cri.go:89] found id: ""
	I1206 10:08:34.769932  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.769942  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:34.769954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:34.770026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:34.797331  293728 cri.go:89] found id: ""
	I1206 10:08:34.797358  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.797367  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:34.797374  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:34.797434  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:34.823286  293728 cri.go:89] found id: ""
	I1206 10:08:34.823309  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.823318  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:34.823324  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:34.823406  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:34.849130  293728 cri.go:89] found id: ""
	I1206 10:08:34.849153  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.849162  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:34.849168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:34.849229  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:34.873883  293728 cri.go:89] found id: ""
	I1206 10:08:34.873905  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.873913  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:34.873922  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:34.873933  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:34.929942  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:34.929976  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:34.944124  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:34.944205  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:35.057155  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:35.057180  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:35.057193  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:35.090699  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:35.090741  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:32.223260  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:34.723506  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
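
	The two W-lines above come from the parallel no-preload test (pid 287962), which is stuck in the same failure mode: it re-fetches the node object until its Ready condition turns true, and every GET is refused. A rough Go reproduction of that retry (assumptions: plain HTTPS with TLS verification disabled and no credentials; the real caller is an authenticated Kubernetes client hitting the same URL):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// URL taken verbatim from the retry lines above.
		url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359"
		for i := 0; i < 3; i++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("will retry:", err) // matches the "connection refused" lines above
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("apiserver answered with status", resp.Status)
			return
		}
	}
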
	I1206 10:08:37.620713  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:37.631409  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:37.631478  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:37.668926  293728 cri.go:89] found id: ""
	I1206 10:08:37.668949  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.668958  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:37.668966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:37.669025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:37.698809  293728 cri.go:89] found id: ""
	I1206 10:08:37.698831  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.698840  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:37.698846  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:37.698905  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:37.726123  293728 cri.go:89] found id: ""
	I1206 10:08:37.726146  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.726155  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:37.726161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:37.726219  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:37.750745  293728 cri.go:89] found id: ""
	I1206 10:08:37.750818  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.750842  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:37.750861  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:37.750945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:37.777744  293728 cri.go:89] found id: ""
	I1206 10:08:37.777814  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.777837  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:37.777857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:37.777945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:37.804124  293728 cri.go:89] found id: ""
	I1206 10:08:37.804151  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.804160  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:37.804166  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:37.804243  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:37.828930  293728 cri.go:89] found id: ""
	I1206 10:08:37.828995  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.829010  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:37.829017  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:37.829076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:37.853436  293728 cri.go:89] found id: ""
	I1206 10:08:37.853459  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.853468  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:37.853476  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:37.853493  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:37.910673  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:37.910709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:37.926464  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:37.926504  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:38.046192  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:38.019476    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.031978    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.032900    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037073    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037736    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:38.019476    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.031978    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.032900    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037073    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037736    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:38.046217  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:38.046230  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:38.078770  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:38.078805  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:37.222544  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:39.222587  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:40.613605  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:40.624180  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:40.624256  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:40.648680  293728 cri.go:89] found id: ""
	I1206 10:08:40.648706  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.648715  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:40.648721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:40.648783  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:40.674691  293728 cri.go:89] found id: ""
	I1206 10:08:40.674716  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.674725  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:40.674732  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:40.674802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:40.700970  293728 cri.go:89] found id: ""
	I1206 10:08:40.700997  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.701006  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:40.701013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:40.701076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:40.729911  293728 cri.go:89] found id: ""
	I1206 10:08:40.729940  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.729949  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:40.729956  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:40.730020  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:40.755581  293728 cri.go:89] found id: ""
	I1206 10:08:40.755611  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.755620  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:40.755626  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:40.755686  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:40.781938  293728 cri.go:89] found id: ""
	I1206 10:08:40.782007  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.782030  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:40.782051  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:40.782139  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:40.811855  293728 cri.go:89] found id: ""
	I1206 10:08:40.811880  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.811889  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:40.811895  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:40.811961  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:40.841527  293728 cri.go:89] found id: ""
	I1206 10:08:40.841553  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.841562  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:40.841571  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:40.841583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:40.854956  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:40.854983  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:40.924783  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:40.916653    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.917278    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.918774    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.919183    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.920651    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:40.916653    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.917278    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.918774    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.919183    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.920651    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:40.924807  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:40.924823  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:40.950611  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:40.950646  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:41.021978  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:41.022008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:43.596447  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:43.607463  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:43.607540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:43.632638  293728 cri.go:89] found id: ""
	I1206 10:08:43.632660  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.632668  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:43.632675  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:43.632737  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:43.657538  293728 cri.go:89] found id: ""
	I1206 10:08:43.657616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.657632  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:43.657639  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:43.657711  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:43.683595  293728 cri.go:89] found id: ""
	I1206 10:08:43.683621  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.683630  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:43.683636  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:43.683706  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:43.709348  293728 cri.go:89] found id: ""
	I1206 10:08:43.709371  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.709380  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:43.709387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:43.709451  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:43.734592  293728 cri.go:89] found id: ""
	I1206 10:08:43.734616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.734625  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:43.734631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:43.734689  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:43.761297  293728 cri.go:89] found id: ""
	I1206 10:08:43.761362  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.761387  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:43.761405  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:43.761493  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:43.789795  293728 cri.go:89] found id: ""
	I1206 10:08:43.789831  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.789840  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:43.789847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:43.789919  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:43.817708  293728 cri.go:89] found id: ""
	I1206 10:08:43.817735  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.817744  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:43.817762  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:43.817774  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:43.831448  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:43.831483  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:43.897033  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:43.888843    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.889730    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891528    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891839    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.893322    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:43.888843    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.889730    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891528    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891839    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.893322    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:43.897107  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:43.897131  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:43.922955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:43.922990  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:43.960423  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:43.960457  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:41.722543  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:43.723229  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:46.534389  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:46.545120  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:46.545205  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:46.570287  293728 cri.go:89] found id: ""
	I1206 10:08:46.570313  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.570322  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:46.570328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:46.570391  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:46.600524  293728 cri.go:89] found id: ""
	I1206 10:08:46.600609  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.600631  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:46.600650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:46.600734  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:46.627292  293728 cri.go:89] found id: ""
	I1206 10:08:46.627314  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.627322  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:46.627328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:46.627424  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:46.652620  293728 cri.go:89] found id: ""
	I1206 10:08:46.652642  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.652651  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:46.652657  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:46.652716  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:46.681992  293728 cri.go:89] found id: ""
	I1206 10:08:46.682015  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.682023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:46.682029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:46.682087  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:46.708290  293728 cri.go:89] found id: ""
	I1206 10:08:46.708363  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.708408  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:46.708434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:46.708528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:46.737816  293728 cri.go:89] found id: ""
	I1206 10:08:46.737890  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.737915  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:46.737935  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:46.738021  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:46.768334  293728 cri.go:89] found id: ""
	I1206 10:08:46.768407  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.768430  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:46.768451  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:46.768491  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:46.782268  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:46.782344  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:46.850687  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:46.840824    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.841622    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.843626    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.844354    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.846055    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:46.840824    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.841622    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.843626    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.844354    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.846055    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:46.850714  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:46.850727  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:46.877310  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:46.877362  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:46.909345  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:46.909376  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:49.467346  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:49.477899  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:49.477971  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:49.502546  293728 cri.go:89] found id: ""
	I1206 10:08:49.502569  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.502578  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:49.502584  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:49.502646  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:49.527592  293728 cri.go:89] found id: ""
	I1206 10:08:49.527663  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.527686  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:49.527699  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:49.527760  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:49.553748  293728 cri.go:89] found id: ""
	I1206 10:08:49.553770  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.553778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:49.553784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:49.553841  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:49.580182  293728 cri.go:89] found id: ""
	I1206 10:08:49.580205  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.580214  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:49.580220  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:49.580285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:49.609009  293728 cri.go:89] found id: ""
	I1206 10:08:49.609034  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.609043  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:49.609050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:49.609114  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:49.634196  293728 cri.go:89] found id: ""
	I1206 10:08:49.634218  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.634227  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:49.634233  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:49.634293  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:49.660015  293728 cri.go:89] found id: ""
	I1206 10:08:49.660038  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.660047  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:49.660053  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:49.660115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:49.685329  293728 cri.go:89] found id: ""
	I1206 10:08:49.685355  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.685364  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:49.685373  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:49.685385  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:49.699189  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:49.699218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:49.768229  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:49.760011    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.760509    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762154    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762619    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.764026    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:49.760011    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.760509    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762154    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762619    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.764026    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:49.768253  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:49.768267  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:49.794221  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:49.794255  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:49.825320  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:49.825349  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:46.222859  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:48.223148  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:50.223492  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:52.381962  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:52.392897  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:52.392974  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:52.421172  293728 cri.go:89] found id: ""
	I1206 10:08:52.421197  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.421206  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:52.421212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:52.421276  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:52.449281  293728 cri.go:89] found id: ""
	I1206 10:08:52.449305  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.449313  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:52.449320  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:52.449378  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:52.474517  293728 cri.go:89] found id: ""
	I1206 10:08:52.474539  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.474547  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:52.474553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:52.474616  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:52.500435  293728 cri.go:89] found id: ""
	I1206 10:08:52.500458  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.500466  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:52.500473  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:52.500532  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:52.526935  293728 cri.go:89] found id: ""
	I1206 10:08:52.526957  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.526965  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:52.526972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:52.527031  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:52.553625  293728 cri.go:89] found id: ""
	I1206 10:08:52.553646  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.553654  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:52.553663  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:52.553721  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:52.580092  293728 cri.go:89] found id: ""
	I1206 10:08:52.580169  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.580194  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:52.580206  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:52.580269  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:52.609595  293728 cri.go:89] found id: ""
	I1206 10:08:52.609622  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.609631  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:52.609640  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:52.609658  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:52.666423  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:52.666460  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:52.680542  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:52.680572  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:52.745123  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:52.737007    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.737635    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739181    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739662    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.741168    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:52.737007    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.737635    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739181    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739662    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.741168    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:52.745142  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:52.745154  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:52.771578  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:52.771612  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:52.722479  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:54.722588  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:57.222560  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:58.722277  287962 node_ready.go:38] duration metric: took 6m0.000230261s for node "no-preload-257359" to be "Ready" ...
	I1206 10:08:58.725649  287962 out.go:203] 
	W1206 10:08:58.728547  287962 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:08:58.728572  287962 out.go:285] * 
	W1206 10:08:58.730704  287962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:08:58.733695  287962 out.go:203] 
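	Process 287962 above spent its full 6m0s deadline polling the node's Ready condition every few seconds before exiting with GUEST_START. The same poll-until-deadline pattern, as an illustrative bash sketch (not minikube's actual Go implementation behind WaitNodeCondition):

	    # Illustrative poll loop; the 360s budget mirrors the "wait 6m0s" in the log.
	    deadline=$((SECONDS + 360))
	    until kubectl get node no-preload-257359 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null \
	        | grep -q True; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "timed out waiting for node Ready" >&2
	        exit 1
	      fi
	      sleep 2
	    done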
	I1206 10:08:55.300596  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:55.311733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:55.311837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:55.337436  293728 cri.go:89] found id: ""
	I1206 10:08:55.337466  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.337475  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:55.337482  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:55.337557  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:55.362426  293728 cri.go:89] found id: ""
	I1206 10:08:55.362449  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.362457  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:55.362462  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:55.362539  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:55.388462  293728 cri.go:89] found id: ""
	I1206 10:08:55.388488  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.388497  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:55.388503  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:55.388567  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:55.417368  293728 cri.go:89] found id: ""
	I1206 10:08:55.417391  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.417400  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:55.417406  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:55.417465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:55.444014  293728 cri.go:89] found id: ""
	I1206 10:08:55.444052  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.444061  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:55.444067  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:55.444126  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:55.473384  293728 cri.go:89] found id: ""
	I1206 10:08:55.473408  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.473417  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:55.473423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:55.473485  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:55.499095  293728 cri.go:89] found id: ""
	I1206 10:08:55.499119  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.499128  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:55.499134  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:55.499193  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:55.530488  293728 cri.go:89] found id: ""
	I1206 10:08:55.530560  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.530585  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
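	Each diagnostic cycle above probes for every expected control-plane container by name and finds none, which is consistent with the connection refusals: no static pods ever came up. An equivalent shell loop for the same probes (a sketch of what the harness runs through ssh_runner, not its Go source):

	    # Probe each expected component the way the log does; empty output means not found.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "No container was found matching \"$c\"" >&2
	    done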
	I1206 10:08:55.530607  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:55.530642  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:55.543996  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:55.544023  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:55.609232  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:55.600433    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.601179    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.602847    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.603477    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.605074    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:55.600433    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.601179    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.602847    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.603477    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.605074    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:55.609295  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:55.609315  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:55.635259  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:55.635292  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:55.663234  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:55.663263  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
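	With no containers to inspect, the harness falls back to host-level evidence: the kubelet and containerd journals, filtered dmesg, a describe-nodes attempt, and raw container status. The same bundle can be gathered by hand inside the node with the commands the log itself runs:

	    # The gather set, copied from the Run: lines above.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a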
	I1206 10:08:58.219942  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:58.240184  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:58.240251  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:58.288171  293728 cri.go:89] found id: ""
	I1206 10:08:58.288193  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.288201  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:58.288208  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:58.288267  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:58.326999  293728 cri.go:89] found id: ""
	I1206 10:08:58.327020  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.327029  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:58.327035  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:58.327104  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:58.354289  293728 cri.go:89] found id: ""
	I1206 10:08:58.354316  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.354325  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:58.354331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:58.354392  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:58.378166  293728 cri.go:89] found id: ""
	I1206 10:08:58.378195  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.378204  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:58.378210  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:58.378270  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:58.405700  293728 cri.go:89] found id: ""
	I1206 10:08:58.405721  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.405734  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:58.405740  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:58.405800  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:58.430772  293728 cri.go:89] found id: ""
	I1206 10:08:58.430800  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.430809  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:58.430816  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:58.430882  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:58.455749  293728 cri.go:89] found id: ""
	I1206 10:08:58.455777  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.455787  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:58.455793  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:58.455854  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:58.480448  293728 cri.go:89] found id: ""
	I1206 10:08:58.480491  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.480502  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:58.480512  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:58.480527  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:58.536659  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:58.536697  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:58.550566  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:58.550589  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:58.618059  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:58.608926    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.609448    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.611304    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.612003    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.613723    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:58.608926    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.609448    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.611304    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.612003    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.613723    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:58.618081  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:58.618093  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:58.643111  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:58.643142  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:01.172811  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:01.189894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:01.189970  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:01.216506  293728 cri.go:89] found id: ""
	I1206 10:09:01.216533  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.216542  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:01.216549  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:01.216610  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:01.248643  293728 cri.go:89] found id: ""
	I1206 10:09:01.248667  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.248675  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:01.248681  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:01.248754  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:01.282778  293728 cri.go:89] found id: ""
	I1206 10:09:01.282799  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.282808  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:01.282814  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:01.282874  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:01.317892  293728 cri.go:89] found id: ""
	I1206 10:09:01.317914  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.317923  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:01.317929  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:01.317996  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:01.344569  293728 cri.go:89] found id: ""
	I1206 10:09:01.344596  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.344606  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:01.344612  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:01.344675  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:01.374785  293728 cri.go:89] found id: ""
	I1206 10:09:01.374812  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.374822  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:01.374829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:01.374913  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:01.399962  293728 cri.go:89] found id: ""
	I1206 10:09:01.399986  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.399995  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:01.400001  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:01.400120  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:01.426824  293728 cri.go:89] found id: ""
	I1206 10:09:01.426850  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.426859  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:01.426877  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:01.426904  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:01.484968  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:01.485001  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:01.506470  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:01.506550  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:01.586157  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:01.577286    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.578043    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.579813    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.580524    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.582153    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:01.577286    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.578043    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.579813    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.580524    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.582153    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:01.586226  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:01.586241  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:01.616859  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:01.617050  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:04.147855  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:04.161529  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:04.161601  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:04.185793  293728 cri.go:89] found id: ""
	I1206 10:09:04.185817  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.185826  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:04.185832  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:04.185893  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:04.213785  293728 cri.go:89] found id: ""
	I1206 10:09:04.213809  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.213818  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:04.213824  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:04.213886  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:04.245746  293728 cri.go:89] found id: ""
	I1206 10:09:04.245769  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.245778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:04.245784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:04.245844  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:04.276836  293728 cri.go:89] found id: ""
	I1206 10:09:04.276864  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.276873  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:04.276879  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:04.276949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:04.307027  293728 cri.go:89] found id: ""
	I1206 10:09:04.307054  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.307089  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:04.307096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:04.307171  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:04.332480  293728 cri.go:89] found id: ""
	I1206 10:09:04.332503  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.332511  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:04.332518  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:04.332580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:04.359083  293728 cri.go:89] found id: ""
	I1206 10:09:04.359105  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.359113  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:04.359119  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:04.359178  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:04.384459  293728 cri.go:89] found id: ""
	I1206 10:09:04.384527  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.384560  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:04.384576  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:04.384589  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:04.398476  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:04.398508  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:04.464529  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:04.455141    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.455968    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.457782    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.458361    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.459895    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:04.455141    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.455968    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.457782    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.458361    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.459895    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:04.464551  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:04.464564  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:04.493800  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:04.493842  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:04.533422  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:04.533455  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:07.095340  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:07.106226  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:07.106321  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:07.133785  293728 cri.go:89] found id: ""
	I1206 10:09:07.133849  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.133886  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:07.133907  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:07.133972  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:07.169905  293728 cri.go:89] found id: ""
	I1206 10:09:07.169932  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.169957  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:07.169964  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:07.170039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:07.198212  293728 cri.go:89] found id: ""
	I1206 10:09:07.198285  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.198309  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:07.198329  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:07.198499  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:07.236730  293728 cri.go:89] found id: ""
	I1206 10:09:07.236809  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.236842  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:07.236862  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:07.236969  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:07.264908  293728 cri.go:89] found id: ""
	I1206 10:09:07.264984  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.265015  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:07.265037  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:07.265147  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:07.293030  293728 cri.go:89] found id: ""
	I1206 10:09:07.293102  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.293125  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:07.293146  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:07.293253  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:07.320479  293728 cri.go:89] found id: ""
	I1206 10:09:07.320542  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.320572  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:07.320600  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:07.320712  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:07.346369  293728 cri.go:89] found id: ""
	I1206 10:09:07.346431  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.346461  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:07.346486  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:07.346524  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:07.375165  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:07.375244  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:07.433189  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:07.433225  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:07.447472  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:07.447500  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:07.536150  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:07.524233    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.525315    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527184    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527855    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.532128    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:07.524233    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.525315    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527184    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527855    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.532128    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:07.536173  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:07.536186  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:10.062333  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:10.073694  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:10.073767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:10.101307  293728 cri.go:89] found id: ""
	I1206 10:09:10.101330  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.101339  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:10.101346  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:10.101413  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:10.128394  293728 cri.go:89] found id: ""
	I1206 10:09:10.128420  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.128428  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:10.128436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:10.128497  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:10.154510  293728 cri.go:89] found id: ""
	I1206 10:09:10.154536  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.154545  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:10.154552  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:10.154611  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:10.179782  293728 cri.go:89] found id: ""
	I1206 10:09:10.179808  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.179816  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:10.179822  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:10.179888  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:10.210072  293728 cri.go:89] found id: ""
	I1206 10:09:10.210142  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.210171  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:10.210201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:10.210315  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:10.245657  293728 cri.go:89] found id: ""
	I1206 10:09:10.245676  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.245684  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:10.245691  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:10.245748  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:10.282232  293728 cri.go:89] found id: ""
	I1206 10:09:10.282305  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.282345  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:10.282365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:10.282454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:10.313160  293728 cri.go:89] found id: ""
	I1206 10:09:10.313225  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.313239  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:10.313249  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:10.313261  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:10.373196  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:10.373230  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:10.386792  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:10.386819  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:10.450525  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:10.442280    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.442968    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.444545    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.445040    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.446664    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:10.442280    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.442968    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.444545    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.445040    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.446664    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:10.450547  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:10.450560  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:10.476832  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:10.476869  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:13.012652  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:13.023659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:13.023732  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:13.047365  293728 cri.go:89] found id: ""
	I1206 10:09:13.047458  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.047473  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:13.047480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:13.047541  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:13.072937  293728 cri.go:89] found id: ""
	I1206 10:09:13.072961  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.072970  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:13.072987  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:13.073048  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:13.097439  293728 cri.go:89] found id: ""
	I1206 10:09:13.097515  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.097531  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:13.097539  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:13.097600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:13.123273  293728 cri.go:89] found id: ""
	I1206 10:09:13.123307  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.123316  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:13.123323  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:13.123426  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:13.149441  293728 cri.go:89] found id: ""
	I1206 10:09:13.149518  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.149534  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:13.149542  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:13.149608  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:13.174275  293728 cri.go:89] found id: ""
	I1206 10:09:13.174298  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.174306  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:13.174313  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:13.174379  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:13.203852  293728 cri.go:89] found id: ""
	I1206 10:09:13.203926  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.203942  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:13.203951  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:13.204013  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:13.237842  293728 cri.go:89] found id: ""
	I1206 10:09:13.237866  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.237875  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:13.237884  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:13.237899  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:13.305042  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:13.305078  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:13.319151  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:13.319178  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:13.383092  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:13.374391    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.375129    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.376927    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.377619    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.379235    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:13.374391    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.375129    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.376927    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.377619    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.379235    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:13.383112  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:13.383123  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:13.409266  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:13.409295  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:15.937340  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:15.948165  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:15.948296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:15.973427  293728 cri.go:89] found id: ""
	I1206 10:09:15.973452  293728 logs.go:282] 0 containers: []
	W1206 10:09:15.973461  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:15.973467  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:15.973529  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:16.006761  293728 cri.go:89] found id: ""
	I1206 10:09:16.006806  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.006816  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:16.006824  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:16.006907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:16.034447  293728 cri.go:89] found id: ""
	I1206 10:09:16.034483  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.034492  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:16.034499  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:16.034572  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:16.060884  293728 cri.go:89] found id: ""
	I1206 10:09:16.060955  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.060972  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:16.060979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:16.061039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:16.090437  293728 cri.go:89] found id: ""
	I1206 10:09:16.090461  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.090470  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:16.090476  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:16.090548  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:16.118175  293728 cri.go:89] found id: ""
	I1206 10:09:16.118201  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.118209  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:16.118222  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:16.118342  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:16.144978  293728 cri.go:89] found id: ""
	I1206 10:09:16.145005  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.145015  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:16.145021  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:16.145083  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:16.169350  293728 cri.go:89] found id: ""
	I1206 10:09:16.169378  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.169392  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:16.169401  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:16.169412  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:16.228680  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:16.228755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:16.243103  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:16.243179  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:16.316618  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:16.307974    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.308682    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310238    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310832    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.312513    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:16.307974    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.308682    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310238    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310832    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.312513    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:16.316645  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:16.316658  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:16.342620  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:16.342651  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
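The cycle above is minikube's control-plane health probe: pgrep for a running kube-apiserver, a crictl query for each expected control-plane container, then log gathering (kubelet, dmesg, describe nodes, containerd, container status) when nothing is found. The same diagnostics can be reproduced by hand inside the node; a minimal sketch, assuming shell access via "minikube ssh" and the binary/kubeconfig paths shown in the log (the profile name is a placeholder):

	# enter the node (profile name assumed)
	minikube ssh -p <profile>
	# is any control-plane container present at all?
	sudo crictl ps -a --quiet --name=kube-apiserver
	# why isn't kubelet starting the static pods?
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	# the same describe-nodes call the log shows failing
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig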
	I1206 10:09:18.872579  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:18.883111  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:18.883184  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:18.909365  293728 cri.go:89] found id: ""
	I1206 10:09:18.909393  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.909402  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:18.909410  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:18.909480  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:18.933714  293728 cri.go:89] found id: ""
	I1206 10:09:18.933737  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.933746  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:18.933752  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:18.933811  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:18.963141  293728 cri.go:89] found id: ""
	I1206 10:09:18.963206  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.963228  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:18.963245  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:18.963333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:18.988486  293728 cri.go:89] found id: ""
	I1206 10:09:18.988511  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.988519  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:18.988526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:18.988604  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:19.020422  293728 cri.go:89] found id: ""
	I1206 10:09:19.020448  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.020456  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:19.020463  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:19.020543  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:19.045103  293728 cri.go:89] found id: ""
	I1206 10:09:19.045164  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.045179  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:19.045186  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:19.045245  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:19.069289  293728 cri.go:89] found id: ""
	I1206 10:09:19.069322  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.069331  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:19.069337  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:19.069403  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:19.094504  293728 cri.go:89] found id: ""
	I1206 10:09:19.094539  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.094547  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:19.094557  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:19.094569  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:19.108440  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:19.108469  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:19.175508  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:19.166822    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.167472    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169260    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169788    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.171507    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:19.166822    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.167472    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169260    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169788    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.171507    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:19.175529  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:19.175542  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:19.201390  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:19.201424  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:19.243342  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:19.243364  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:21.808230  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:21.818876  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:21.818955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:21.848634  293728 cri.go:89] found id: ""
	I1206 10:09:21.848655  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.848663  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:21.848669  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:21.848728  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:21.872798  293728 cri.go:89] found id: ""
	I1206 10:09:21.872861  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.872875  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:21.872882  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:21.872938  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:21.900148  293728 cri.go:89] found id: ""
	I1206 10:09:21.900174  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.900183  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:21.900190  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:21.900250  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:21.924786  293728 cri.go:89] found id: ""
	I1206 10:09:21.924813  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.924822  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:21.924829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:21.924915  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:21.954178  293728 cri.go:89] found id: ""
	I1206 10:09:21.954212  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.954221  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:21.954227  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:21.954296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:21.979818  293728 cri.go:89] found id: ""
	I1206 10:09:21.979842  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.979850  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:21.979857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:21.979916  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:22.008409  293728 cri.go:89] found id: ""
	I1206 10:09:22.008435  293728 logs.go:282] 0 containers: []
	W1206 10:09:22.008445  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:22.008452  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:22.008527  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:22.035338  293728 cri.go:89] found id: ""
	I1206 10:09:22.035363  293728 logs.go:282] 0 containers: []
	W1206 10:09:22.035396  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:22.035407  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:22.035418  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:22.091435  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:22.091472  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:22.105532  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:22.105567  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:22.171773  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:22.163104    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.163868    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.165557    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.166181    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.167828    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:22.163104    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.163868    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.165557    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.166181    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.167828    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:22.171793  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:22.171806  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:22.197667  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:22.197706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:24.735529  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:24.748375  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:24.748558  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:24.788906  293728 cri.go:89] found id: ""
	I1206 10:09:24.788978  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.789002  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:24.789024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:24.789113  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:24.818364  293728 cri.go:89] found id: ""
	I1206 10:09:24.818431  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.818453  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:24.818472  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:24.818564  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:24.845760  293728 cri.go:89] found id: ""
	I1206 10:09:24.845802  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.845811  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:24.845817  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:24.845889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:24.872973  293728 cri.go:89] found id: ""
	I1206 10:09:24.872997  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.873006  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:24.873012  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:24.873076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:24.902758  293728 cri.go:89] found id: ""
	I1206 10:09:24.902791  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.902801  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:24.902809  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:24.902885  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:24.929539  293728 cri.go:89] found id: ""
	I1206 10:09:24.929565  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.929575  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:24.929582  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:24.929665  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:24.955731  293728 cri.go:89] found id: ""
	I1206 10:09:24.955806  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.955822  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:24.955829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:24.955891  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:24.980673  293728 cri.go:89] found id: ""
	I1206 10:09:24.980704  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.980713  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:24.980722  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:24.980734  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:25.017868  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:25.017899  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:25.077472  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:25.077510  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:25.093107  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:25.093139  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:25.164597  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:25.155645    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.156390    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158149    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158952    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.160572    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:25.155645    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.156390    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158149    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158952    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.160572    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:25.164635  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:25.164649  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:27.694118  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:27.704932  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:27.705013  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:27.734684  293728 cri.go:89] found id: ""
	I1206 10:09:27.734762  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.734784  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:27.734802  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:27.734892  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:27.771355  293728 cri.go:89] found id: ""
	I1206 10:09:27.771442  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.771466  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:27.771485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:27.771568  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:27.800742  293728 cri.go:89] found id: ""
	I1206 10:09:27.800818  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.800836  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:27.800844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:27.800907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:27.827029  293728 cri.go:89] found id: ""
	I1206 10:09:27.827058  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.827068  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:27.827075  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:27.827136  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:27.853299  293728 cri.go:89] found id: ""
	I1206 10:09:27.853323  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.853332  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:27.853339  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:27.853431  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:27.878371  293728 cri.go:89] found id: ""
	I1206 10:09:27.878394  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.878402  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:27.878415  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:27.878525  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:27.903247  293728 cri.go:89] found id: ""
	I1206 10:09:27.903269  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.903277  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:27.903283  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:27.903405  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:27.927665  293728 cri.go:89] found id: ""
	I1206 10:09:27.927687  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.927695  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:27.927703  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:27.927714  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:27.993787  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:27.984910    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.985739    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.987460    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.988125    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.989907    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:27.984910    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.985739    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.987460    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.988125    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.989907    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:27.993808  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:27.993820  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:28.021097  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:28.021132  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:28.050410  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:28.050438  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:28.108602  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:28.108636  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:30.622836  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:30.633282  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:30.633354  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:30.657827  293728 cri.go:89] found id: ""
	I1206 10:09:30.657850  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.657859  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:30.657865  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:30.657929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:30.685495  293728 cri.go:89] found id: ""
	I1206 10:09:30.685525  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.685534  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:30.685541  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:30.685611  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:30.710543  293728 cri.go:89] found id: ""
	I1206 10:09:30.710576  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.710585  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:30.710591  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:30.710661  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:30.738572  293728 cri.go:89] found id: ""
	I1206 10:09:30.738667  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.738690  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:30.738710  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:30.738815  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:30.782603  293728 cri.go:89] found id: ""
	I1206 10:09:30.782684  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.782706  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:30.782725  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:30.782829  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:30.810264  293728 cri.go:89] found id: ""
	I1206 10:09:30.810342  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.810364  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:30.810387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:30.810479  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:30.835864  293728 cri.go:89] found id: ""
	I1206 10:09:30.835944  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.835960  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:30.835968  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:30.836050  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:30.860832  293728 cri.go:89] found id: ""
	I1206 10:09:30.860858  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.860867  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:30.860876  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:30.860887  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:30.917397  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:30.917433  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:30.931490  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:30.931572  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:31.004606  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:30.993339    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.994064    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.995768    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.996292    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.997903    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:30.993339    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.994064    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.995768    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.996292    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.997903    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:31.004692  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:31.004725  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:31.033130  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:31.033168  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:33.563282  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:33.574558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:33.574631  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:33.600752  293728 cri.go:89] found id: ""
	I1206 10:09:33.600784  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.600797  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:33.600804  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:33.600876  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:33.626879  293728 cri.go:89] found id: ""
	I1206 10:09:33.626909  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.626919  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:33.626925  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:33.626987  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:33.652921  293728 cri.go:89] found id: ""
	I1206 10:09:33.652945  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.652954  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:33.652960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:33.653025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:33.678584  293728 cri.go:89] found id: ""
	I1206 10:09:33.678619  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.678627  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:33.678634  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:33.678704  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:33.706401  293728 cri.go:89] found id: ""
	I1206 10:09:33.706424  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.706433  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:33.706439  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:33.706514  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:33.754300  293728 cri.go:89] found id: ""
	I1206 10:09:33.754326  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.754334  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:33.754341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:33.754410  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:33.782351  293728 cri.go:89] found id: ""
	I1206 10:09:33.782388  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.782397  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:33.782410  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:33.782479  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:33.809362  293728 cri.go:89] found id: ""
	I1206 10:09:33.809399  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.809407  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:33.809417  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:33.809428  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:33.845485  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:33.845510  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:33.902066  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:33.902106  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:33.915843  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:33.915871  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:33.983566  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:33.974999    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.975872    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977595    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977932    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.979555    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:33.974999    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.975872    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977595    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977932    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.979555    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:33.983589  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:33.983610  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:36.512857  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:36.524687  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:36.524752  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:36.559537  293728 cri.go:89] found id: ""
	I1206 10:09:36.559559  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.559568  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:36.559574  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:36.559641  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:36.584964  293728 cri.go:89] found id: ""
	I1206 10:09:36.585033  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.585049  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:36.585056  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:36.585124  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:36.610724  293728 cri.go:89] found id: ""
	I1206 10:09:36.610750  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.610759  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:36.610765  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:36.610824  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:36.641090  293728 cri.go:89] found id: ""
	I1206 10:09:36.641158  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.641185  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:36.641198  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:36.641287  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:36.665900  293728 cri.go:89] found id: ""
	I1206 10:09:36.665926  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.665935  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:36.665941  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:36.666004  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:36.693620  293728 cri.go:89] found id: ""
	I1206 10:09:36.693650  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.693659  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:36.693666  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:36.693731  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:36.734543  293728 cri.go:89] found id: ""
	I1206 10:09:36.734621  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.734646  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:36.734665  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:36.734757  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:36.776081  293728 cri.go:89] found id: ""
	I1206 10:09:36.776146  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.776168  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:36.776188  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:36.776226  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:36.792679  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:36.792711  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:36.861792  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:36.852688    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.853295    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855348    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855858    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.857501    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:36.852688    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.853295    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855348    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855858    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.857501    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:36.861815  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:36.861828  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:36.887686  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:36.887722  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:36.915203  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:36.915229  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
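The cycle repeating through this section has a fixed shape: roughly every three seconds minikube checks for a running kube-apiserver process with pgrep, lists each expected control-plane container with "crictl ps -a --quiet --name=<component>" (empty output is logged as found id: "" followed by the No-container warning), and, finding none, gathers diagnostics from kubelet, dmesg, kubectl describe nodes, containerd, and the container runtime. The Go sketch below illustrates that loop; it is a reconstruction of the pattern visible in the log, not minikube's actual source, and it assumes pgrep and crictl are present on the node:

    // Illustrative sketch only (not minikube source): the apiserver
    // health-check / log-gathering loop that produces the lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // The component names probed in each cycle of the log.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    // apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`;
    // pgrep exits non-zero when no process matches.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // containerIDs mirrors `sudo crictl ps -a --quiet --name=<c>`; empty
    // output corresponds to the `found id: ""` lines above.
    func containerIDs(name string) []string {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	for !apiserverRunning() {
    		for _, c := range components {
    			if len(containerIDs(c)) == 0 {
    				fmt.Printf("no container was found matching %q\n", c)
    			}
    		}
    		// ... gather kubelet/dmesg/describe-nodes/containerd logs here ...
    		time.Sleep(3 * time.Second) // matches the ~3 s spacing of the timestamps
    	}
    }

In this run every probe returns an empty ID list, so the gather-and-retry cycle repeats unchanged for the remainder of the section.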
	I1206 10:09:39.473166  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:39.484986  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:39.485070  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:39.519022  293728 cri.go:89] found id: ""
	I1206 10:09:39.519084  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.519097  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:39.519105  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:39.519183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:39.550949  293728 cri.go:89] found id: ""
	I1206 10:09:39.550987  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.551002  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:39.551009  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:39.551083  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:39.576090  293728 cri.go:89] found id: ""
	I1206 10:09:39.576120  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.576129  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:39.576136  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:39.576199  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:39.602338  293728 cri.go:89] found id: ""
	I1206 10:09:39.602364  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.602374  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:39.602386  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:39.602447  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:39.627803  293728 cri.go:89] found id: ""
	I1206 10:09:39.627841  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.627850  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:39.627857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:39.627929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:39.653348  293728 cri.go:89] found id: ""
	I1206 10:09:39.653376  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.653385  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:39.653392  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:39.653454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:39.679324  293728 cri.go:89] found id: ""
	I1206 10:09:39.679418  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.679434  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:39.679442  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:39.679515  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:39.704684  293728 cri.go:89] found id: ""
	I1206 10:09:39.704708  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.704717  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:39.704726  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:39.704738  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:39.764873  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:39.764905  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:39.779533  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:39.779558  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:39.852807  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:39.844502    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.845176    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.846778    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.847166    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.848722    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:39.844502    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.845176    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.846778    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.847166    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.848722    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:39.852829  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:39.852842  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:39.879753  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:39.879787  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
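The "container status" step runs sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, a two-level fallback: even if which cannot locate crictl the bare name is still attempted, and only when that invocation fails does the outer || hand off to docker. An illustrative Go rendering of the same fallback (a sketch of the shell idiom, not minikube code):

    // Sketch of the shell fallback above, written out in Go.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    	if err != nil {
    		// Mirrors `|| sudo docker ps -a`: only consulted when the
    		// crictl invocation itself fails.
    		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	}
    	if err != nil {
    		fmt.Println("neither crictl nor docker could list containers:", err)
    		return
    	}
    	fmt.Print(string(out))
    }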
	I1206 10:09:42.409609  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:42.421328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:42.421397  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:42.447308  293728 cri.go:89] found id: ""
	I1206 10:09:42.447333  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.447342  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:42.447349  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:42.447440  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:42.481946  293728 cri.go:89] found id: ""
	I1206 10:09:42.481977  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.481985  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:42.481992  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:42.482055  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:42.514307  293728 cri.go:89] found id: ""
	I1206 10:09:42.514378  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.514401  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:42.514420  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:42.514512  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:42.546780  293728 cri.go:89] found id: ""
	I1206 10:09:42.546806  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.546815  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:42.546822  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:42.546891  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:42.573407  293728 cri.go:89] found id: ""
	I1206 10:09:42.573430  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.573439  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:42.573445  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:42.573501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:42.599133  293728 cri.go:89] found id: ""
	I1206 10:09:42.599156  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.599164  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:42.599171  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:42.599233  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:42.625000  293728 cri.go:89] found id: ""
	I1206 10:09:42.625028  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.625037  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:42.625043  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:42.625107  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:42.654408  293728 cri.go:89] found id: ""
	I1206 10:09:42.654436  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.654446  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:42.654455  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:42.654467  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:42.711699  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:42.711733  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:42.727806  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:42.727881  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:42.811421  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:42.801418    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.803078    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.804330    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.805386    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.807056    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:42.801418    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.803078    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.804330    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.805386    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.807056    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:42.811446  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:42.811460  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:42.838410  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:42.838445  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:45.369084  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:45.380279  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:45.380388  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:45.405587  293728 cri.go:89] found id: ""
	I1206 10:09:45.405612  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.405621  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:45.405628  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:45.405688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:45.433060  293728 cri.go:89] found id: ""
	I1206 10:09:45.433088  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.433097  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:45.433103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:45.433164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:45.460740  293728 cri.go:89] found id: ""
	I1206 10:09:45.460763  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.460772  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:45.460778  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:45.460837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:45.497706  293728 cri.go:89] found id: ""
	I1206 10:09:45.497771  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.497793  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:45.497813  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:45.497904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:45.534656  293728 cri.go:89] found id: ""
	I1206 10:09:45.534681  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.534690  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:45.534696  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:45.534770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:45.564269  293728 cri.go:89] found id: ""
	I1206 10:09:45.564350  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.564372  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:45.564387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:45.564474  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:45.588438  293728 cri.go:89] found id: ""
	I1206 10:09:45.588517  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.588539  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:45.588558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:45.588651  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:45.613920  293728 cri.go:89] found id: ""
	I1206 10:09:45.613951  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.613960  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:45.613970  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:45.613980  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:45.641788  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:45.641863  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:45.699089  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:45.699123  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:45.712662  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:45.712734  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:45.793739  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:45.785473    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.786020    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.787671    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.788175    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.789766    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:45.785473    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.786020    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.787671    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.788175    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.789766    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:45.793759  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:45.793773  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:48.320858  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:48.331937  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:48.332070  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:48.356716  293728 cri.go:89] found id: ""
	I1206 10:09:48.356784  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.356798  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:48.356806  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:48.356866  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:48.382138  293728 cri.go:89] found id: ""
	I1206 10:09:48.382172  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.382181  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:48.382188  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:48.382258  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:48.408214  293728 cri.go:89] found id: ""
	I1206 10:09:48.408238  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.408247  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:48.408253  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:48.408313  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:48.433328  293728 cri.go:89] found id: ""
	I1206 10:09:48.433351  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.433360  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:48.433366  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:48.433428  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:48.460263  293728 cri.go:89] found id: ""
	I1206 10:09:48.460284  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.460292  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:48.460298  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:48.460355  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:48.488344  293728 cri.go:89] found id: ""
	I1206 10:09:48.488373  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.488381  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:48.488388  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:48.488452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:48.521629  293728 cri.go:89] found id: ""
	I1206 10:09:48.521658  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.521666  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:48.521673  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:48.521759  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:48.549255  293728 cri.go:89] found id: ""
	I1206 10:09:48.549321  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.549344  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:48.549365  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:48.549392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:48.609413  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:48.609450  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:48.623661  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:48.623688  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:48.693637  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:48.684667    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.685431    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687132    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687585    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.689240    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:48.684667    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.685431    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687132    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687585    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.689240    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:48.693661  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:48.693674  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:48.719587  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:48.719660  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:51.258260  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:51.268785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:51.268856  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:51.295768  293728 cri.go:89] found id: ""
	I1206 10:09:51.295793  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.295801  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:51.295808  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:51.295879  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:51.321853  293728 cri.go:89] found id: ""
	I1206 10:09:51.321886  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.321894  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:51.321900  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:51.321968  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:51.347472  293728 cri.go:89] found id: ""
	I1206 10:09:51.347494  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.347502  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:51.347517  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:51.347575  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:51.371656  293728 cri.go:89] found id: ""
	I1206 10:09:51.371683  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.371692  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:51.371698  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:51.371758  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:51.397262  293728 cri.go:89] found id: ""
	I1206 10:09:51.397289  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.397298  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:51.397305  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:51.397409  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:51.423015  293728 cri.go:89] found id: ""
	I1206 10:09:51.423045  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.423061  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:51.423076  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:51.423149  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:51.454355  293728 cri.go:89] found id: ""
	I1206 10:09:51.454381  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.454390  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:51.454396  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:51.454463  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:51.486768  293728 cri.go:89] found id: ""
	I1206 10:09:51.486808  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.486823  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:51.486832  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:51.486843  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:51.554153  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:51.554192  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:51.568560  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:51.568590  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:51.634642  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:51.626552    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.627100    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.628640    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.629103    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.630610    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:51.626552    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.627100    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.628640    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.629103    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.630610    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:51.634664  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:51.634678  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:51.660429  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:51.660463  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:54.188738  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:54.201905  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:54.201981  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:54.227986  293728 cri.go:89] found id: ""
	I1206 10:09:54.228012  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.228021  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:54.228028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:54.228113  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:54.254201  293728 cri.go:89] found id: ""
	I1206 10:09:54.254235  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.254245  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:54.254283  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:54.254395  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:54.278782  293728 cri.go:89] found id: ""
	I1206 10:09:54.278820  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.278830  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:54.278852  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:54.278935  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:54.303206  293728 cri.go:89] found id: ""
	I1206 10:09:54.303240  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.303249  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:54.303256  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:54.303323  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:54.328700  293728 cri.go:89] found id: ""
	I1206 10:09:54.328726  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.328735  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:54.328741  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:54.328818  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:54.352531  293728 cri.go:89] found id: ""
	I1206 10:09:54.352613  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.352638  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:54.352656  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:54.352746  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:54.381751  293728 cri.go:89] found id: ""
	I1206 10:09:54.381785  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.381795  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:54.381802  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:54.381873  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:54.410917  293728 cri.go:89] found id: ""
	I1206 10:09:54.410993  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.411015  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:54.411037  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:54.411076  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:54.440257  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:54.440285  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:54.500235  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:54.500278  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:54.515938  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:54.515966  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:54.588801  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:54.579599    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.580550    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582125    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582602    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.584281    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:54.579599    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.580550    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582125    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582602    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.584281    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:54.588823  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:54.588836  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:57.116312  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:57.127033  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:57.127111  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:57.152251  293728 cri.go:89] found id: ""
	I1206 10:09:57.152273  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.152282  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:57.152288  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:57.152346  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:57.176684  293728 cri.go:89] found id: ""
	I1206 10:09:57.176758  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.176773  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:57.176781  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:57.176840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:57.202374  293728 cri.go:89] found id: ""
	I1206 10:09:57.202436  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.202470  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:57.202494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:57.202580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:57.227547  293728 cri.go:89] found id: ""
	I1206 10:09:57.227573  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.227582  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:57.227589  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:57.227650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:57.253673  293728 cri.go:89] found id: ""
	I1206 10:09:57.253705  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.253714  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:57.253721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:57.253789  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:57.278618  293728 cri.go:89] found id: ""
	I1206 10:09:57.278644  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.278654  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:57.278660  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:57.278722  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:57.304336  293728 cri.go:89] found id: ""
	I1206 10:09:57.304384  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.304397  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:57.304423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:57.304508  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:57.334469  293728 cri.go:89] found id: ""
	I1206 10:09:57.334492  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.334500  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:57.334508  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:57.334520  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:57.348891  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:57.348922  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:57.415906  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:57.407558    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.408081    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.409719    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.410287    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.411964    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:57.407558    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.408081    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.409719    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.410287    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.411964    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:57.415927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:57.415939  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:57.441880  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:57.441918  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:57.475269  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:57.475297  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
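Two gather steps bypass the (absent) containers entirely: journalctl -u <unit> -n 400 tails the most recent 400 lines of the kubelet and containerd units, and dmesg -PH -L=never --level warn,err,crit,alert,emerg prints kernel messages at the listed severities with human-readable timestamps, no pager, and no color. A small illustrative helper for the journalctl side, assuming journalctl is available and sudo is passwordless as it is inside the minikube node:

    // Illustrative helper: tail the last n journal lines of a systemd
    // unit, as the `journalctl -u kubelet -n 400` runs above do.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func tailUnit(unit string, n int) (string, error) {
    	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
    	return string(out), err
    }

    func main() {
    	for _, u := range []string{"kubelet", "containerd"} {
    		text, err := tailUnit(u, 400)
    		if err != nil {
    			fmt.Println("journalctl failed for", u+":", err)
    			continue
    		}
    		fmt.Printf("=== %s (last 400 lines) ===\n%s", u, text)
    	}
    }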
	I1206 10:10:00.036981  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:00.091003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:00.091183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:00.199598  293728 cri.go:89] found id: ""
	I1206 10:10:00.199642  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.199652  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:00.199660  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:00.199761  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:00.291513  293728 cri.go:89] found id: ""
	I1206 10:10:00.291550  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.291562  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:00.291569  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:00.291653  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:00.363428  293728 cri.go:89] found id: ""
	I1206 10:10:00.363514  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.363541  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:00.363559  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:00.363706  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:00.471969  293728 cri.go:89] found id: ""
	I1206 10:10:00.471994  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.472004  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:00.472013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:00.472080  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:00.548937  293728 cri.go:89] found id: ""
	I1206 10:10:00.548960  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.548969  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:00.548976  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:00.549039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:00.612750  293728 cri.go:89] found id: ""
	I1206 10:10:00.612774  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.612783  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:00.612790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:00.612857  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:00.648024  293728 cri.go:89] found id: ""
	I1206 10:10:00.648051  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.648061  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:00.648068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:00.648145  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:00.678506  293728 cri.go:89] found id: ""
	I1206 10:10:00.678587  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.678615  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:00.678636  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:00.678671  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:00.755139  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:00.755237  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:00.771588  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:00.771629  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:00.849622  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:00.840203    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.840934    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.842739    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.843443    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.845027    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:00.849656  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:00.849669  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:00.876546  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:00.876583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:03.409148  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:03.420472  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:03.420547  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:03.449464  293728 cri.go:89] found id: ""
	I1206 10:10:03.449487  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.449496  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:03.449521  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:03.449598  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:03.482241  293728 cri.go:89] found id: ""
	I1206 10:10:03.482267  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.482276  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:03.482286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:03.482349  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:03.512048  293728 cri.go:89] found id: ""
	I1206 10:10:03.512075  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.512084  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:03.512090  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:03.512153  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:03.544039  293728 cri.go:89] found id: ""
	I1206 10:10:03.544064  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.544073  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:03.544080  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:03.544159  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:03.568866  293728 cri.go:89] found id: ""
	I1206 10:10:03.568942  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.568966  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:03.568978  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:03.569071  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:03.595896  293728 cri.go:89] found id: ""
	I1206 10:10:03.595930  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.595940  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:03.595946  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:03.596020  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:03.620834  293728 cri.go:89] found id: ""
	I1206 10:10:03.620863  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.620871  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:03.620878  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:03.620950  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:03.644327  293728 cri.go:89] found id: ""
	I1206 10:10:03.644359  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.644368  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:03.644377  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:03.644392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:03.707856  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:03.699517    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.700161    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.701732    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.702251    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.703903    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:03.707879  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:03.707891  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:03.735529  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:03.735562  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:03.767489  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:03.767516  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:03.831889  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:03.831926  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:06.346582  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:06.357845  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:06.357929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:06.387151  293728 cri.go:89] found id: ""
	I1206 10:10:06.387176  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.387185  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:06.387192  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:06.387256  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:06.413165  293728 cri.go:89] found id: ""
	I1206 10:10:06.413194  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.413203  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:06.413210  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:06.413271  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:06.437677  293728 cri.go:89] found id: ""
	I1206 10:10:06.437701  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.437710  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:06.437716  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:06.437772  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:06.463040  293728 cri.go:89] found id: ""
	I1206 10:10:06.463070  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.463080  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:06.463087  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:06.463150  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:06.494675  293728 cri.go:89] found id: ""
	I1206 10:10:06.494751  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.494774  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:06.494794  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:06.494889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:06.526246  293728 cri.go:89] found id: ""
	I1206 10:10:06.526316  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.526337  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:06.526357  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:06.526440  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:06.559804  293728 cri.go:89] found id: ""
	I1206 10:10:06.559829  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.559839  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:06.559845  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:06.559907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:06.589855  293728 cri.go:89] found id: ""
	I1206 10:10:06.589930  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.589964  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:06.590003  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:06.590032  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:06.616596  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:06.616632  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:06.646994  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:06.647021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:06.702957  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:06.702993  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:06.716751  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:06.716778  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:06.798071  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:06.789752    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.790292    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.791805    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.792344    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.793982    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:09.298347  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:09.308960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:09.309035  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:09.333650  293728 cri.go:89] found id: ""
	I1206 10:10:09.333675  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.333683  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:09.333690  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:09.333767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:09.357861  293728 cri.go:89] found id: ""
	I1206 10:10:09.357885  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.357894  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:09.357900  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:09.358010  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:09.382744  293728 cri.go:89] found id: ""
	I1206 10:10:09.382770  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.382779  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:09.382785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:09.382878  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:09.413180  293728 cri.go:89] found id: ""
	I1206 10:10:09.413259  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.413282  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:09.413295  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:09.413376  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:09.438201  293728 cri.go:89] found id: ""
	I1206 10:10:09.438227  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.438235  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:09.438242  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:09.438300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:09.462981  293728 cri.go:89] found id: ""
	I1206 10:10:09.463058  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.463084  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:09.463103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:09.463199  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:09.489818  293728 cri.go:89] found id: ""
	I1206 10:10:09.489840  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.489849  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:09.489855  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:09.489914  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:09.517662  293728 cri.go:89] found id: ""
	I1206 10:10:09.517689  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.517698  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:09.517707  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:09.517719  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:09.576466  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:09.576502  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:09.590374  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:09.590401  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:09.655862  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:09.646406    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.646998    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.648878    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.649656    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.651513    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:09.655883  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:09.655895  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:09.681441  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:09.681477  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:12.211127  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:12.222215  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:12.222285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:12.247472  293728 cri.go:89] found id: ""
	I1206 10:10:12.247547  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.247562  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:12.247573  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:12.247633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:12.272505  293728 cri.go:89] found id: ""
	I1206 10:10:12.272533  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.272543  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:12.272550  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:12.272638  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:12.297673  293728 cri.go:89] found id: ""
	I1206 10:10:12.297698  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.297707  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:12.297715  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:12.297830  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:12.322568  293728 cri.go:89] found id: ""
	I1206 10:10:12.322609  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.322618  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:12.322625  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:12.322701  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:12.349304  293728 cri.go:89] found id: ""
	I1206 10:10:12.349331  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.349341  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:12.349347  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:12.349443  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:12.375736  293728 cri.go:89] found id: ""
	I1206 10:10:12.375762  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.375771  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:12.375778  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:12.375840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:12.400942  293728 cri.go:89] found id: ""
	I1206 10:10:12.400966  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.400974  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:12.400981  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:12.401040  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:12.426874  293728 cri.go:89] found id: ""
	I1206 10:10:12.426916  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.426926  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:12.426936  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:12.426948  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:12.484510  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:12.484587  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:12.499107  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:12.499186  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:12.572427  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:12.563920    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.564850    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566425    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566780    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.568265    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:12.572450  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:12.572466  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:12.598814  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:12.598849  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:15.128638  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:15.139805  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:15.139876  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:15.165109  293728 cri.go:89] found id: ""
	I1206 10:10:15.165133  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.165149  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:15.165156  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:15.165219  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:15.196948  293728 cri.go:89] found id: ""
	I1206 10:10:15.196974  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.196982  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:15.196989  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:15.197059  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:15.222058  293728 cri.go:89] found id: ""
	I1206 10:10:15.222082  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.222090  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:15.222096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:15.222155  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:15.248215  293728 cri.go:89] found id: ""
	I1206 10:10:15.248238  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.248247  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:15.248254  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:15.248312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:15.273082  293728 cri.go:89] found id: ""
	I1206 10:10:15.273104  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.273113  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:15.273120  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:15.273179  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:15.298006  293728 cri.go:89] found id: ""
	I1206 10:10:15.298029  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.298037  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:15.298043  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:15.298101  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:15.322519  293728 cri.go:89] found id: ""
	I1206 10:10:15.322542  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.322550  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:15.322557  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:15.322615  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:15.347746  293728 cri.go:89] found id: ""
	I1206 10:10:15.347770  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.347778  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:15.347786  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:15.347797  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:15.361534  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:15.361561  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:15.427348  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:15.418245    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.419137    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421066    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421690    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.423366    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:15.427370  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:15.427404  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:15.453826  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:15.453864  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:15.487015  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:15.487049  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:18.053317  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:18.064493  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:18.064566  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:18.089748  293728 cri.go:89] found id: ""
	I1206 10:10:18.089773  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.089782  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:18.089789  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:18.089850  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:18.116011  293728 cri.go:89] found id: ""
	I1206 10:10:18.116039  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.116048  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:18.116055  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:18.116116  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:18.146676  293728 cri.go:89] found id: ""
	I1206 10:10:18.146701  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.146710  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:18.146716  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:18.146783  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:18.172596  293728 cri.go:89] found id: ""
	I1206 10:10:18.172621  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.172631  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:18.172643  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:18.172703  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:18.198506  293728 cri.go:89] found id: ""
	I1206 10:10:18.198584  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.198608  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:18.198630  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:18.198747  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:18.230708  293728 cri.go:89] found id: ""
	I1206 10:10:18.230786  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.230812  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:18.230830  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:18.230955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:18.257169  293728 cri.go:89] found id: ""
	I1206 10:10:18.257235  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.257250  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:18.257257  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:18.257317  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:18.285950  293728 cri.go:89] found id: ""
	I1206 10:10:18.285976  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.285985  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:18.285994  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:18.286006  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:18.318446  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:18.318471  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:18.379191  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:18.379227  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:18.393268  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:18.393295  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:18.458997  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:18.449882    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.450796    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452543    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452857    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.454349    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:18.459023  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:18.459035  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:20.987221  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:20.999561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:20.999633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:21.037750  293728 cri.go:89] found id: ""
	I1206 10:10:21.037771  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.037780  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:21.037786  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:21.037846  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:21.063327  293728 cri.go:89] found id: ""
	I1206 10:10:21.063350  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.063358  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:21.063364  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:21.063448  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:21.088200  293728 cri.go:89] found id: ""
	I1206 10:10:21.088223  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.088231  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:21.088237  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:21.088298  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:21.118025  293728 cri.go:89] found id: ""
	I1206 10:10:21.118051  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.118061  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:21.118068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:21.118126  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:21.143740  293728 cri.go:89] found id: ""
	I1206 10:10:21.143770  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.143779  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:21.143785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:21.143848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:21.169323  293728 cri.go:89] found id: ""
	I1206 10:10:21.169401  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.169417  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:21.169424  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:21.169501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:21.194291  293728 cri.go:89] found id: ""
	I1206 10:10:21.194356  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.194380  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:21.194398  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:21.194490  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:21.219471  293728 cri.go:89] found id: ""
	I1206 10:10:21.219599  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.219653  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:21.219679  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:21.219706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:21.277216  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:21.277252  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:21.291736  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:21.291766  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:21.366215  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:21.357353    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.358173    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.359989    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.360738    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.362264    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:21.366236  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:21.366250  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:21.392405  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:21.392437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:23.923653  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:23.934595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:23.934670  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:23.961107  293728 cri.go:89] found id: ""
	I1206 10:10:23.961130  293728 logs.go:282] 0 containers: []
	W1206 10:10:23.961138  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:23.961145  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:23.961209  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:23.994692  293728 cri.go:89] found id: ""
	I1206 10:10:23.994729  293728 logs.go:282] 0 containers: []
	W1206 10:10:23.994739  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:23.994745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:23.994817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:24.028605  293728 cri.go:89] found id: ""
	I1206 10:10:24.028689  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.028715  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:24.028735  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:24.028848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:24.057290  293728 cri.go:89] found id: ""
	I1206 10:10:24.057317  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.057326  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:24.057333  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:24.057400  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:24.085994  293728 cri.go:89] found id: ""
	I1206 10:10:24.086029  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.086039  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:24.086045  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:24.086128  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:24.112798  293728 cri.go:89] found id: ""
	I1206 10:10:24.112826  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.112835  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:24.112841  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:24.112930  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:24.139149  293728 cri.go:89] found id: ""
	I1206 10:10:24.139175  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.139184  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:24.139190  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:24.139300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:24.165213  293728 cri.go:89] found id: ""
	I1206 10:10:24.165239  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.165248  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:24.165257  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:24.165268  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:24.223441  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:24.223477  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:24.237256  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:24.237282  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:24.303131  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:24.295355    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.295806    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297324    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297646    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.299178    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:24.303154  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:24.303170  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:24.329120  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:24.329160  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:26.857977  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:26.868844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:26.868920  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:26.893530  293728 cri.go:89] found id: ""
	I1206 10:10:26.893555  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.893563  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:26.893569  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:26.893628  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:26.922692  293728 cri.go:89] found id: ""
	I1206 10:10:26.922718  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.922727  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:26.922733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:26.922794  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:26.948535  293728 cri.go:89] found id: ""
	I1206 10:10:26.948560  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.948569  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:26.948575  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:26.948640  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:26.976097  293728 cri.go:89] found id: ""
	I1206 10:10:26.976167  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.976193  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:26.976212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:26.976300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:27.010083  293728 cri.go:89] found id: ""
	I1206 10:10:27.010161  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.010184  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:27.010229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:27.010333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:27.038839  293728 cri.go:89] found id: ""
	I1206 10:10:27.038913  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.038934  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:27.038954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:27.039084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:27.066982  293728 cri.go:89] found id: ""
	I1206 10:10:27.067063  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.067086  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:27.067105  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:27.067216  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:27.092863  293728 cri.go:89] found id: ""
	I1206 10:10:27.092891  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.092899  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:27.092909  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:27.092950  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:27.120341  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:27.120375  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:27.177452  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:27.177489  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:27.191505  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:27.191533  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:27.260108  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:27.251592    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.252285    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.253999    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.254325    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.255968    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:27.260129  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:27.260141  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:29.785293  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:29.795873  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:29.795947  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:29.826896  293728 cri.go:89] found id: ""
	I1206 10:10:29.826934  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.826944  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:29.826950  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:29.827093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:29.857768  293728 cri.go:89] found id: ""
	I1206 10:10:29.857793  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.857803  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:29.857809  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:29.857881  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:29.885651  293728 cri.go:89] found id: ""
	I1206 10:10:29.885686  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.885696  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:29.885721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:29.885805  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:29.910764  293728 cri.go:89] found id: ""
	I1206 10:10:29.910892  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.910916  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:29.910928  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:29.911014  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:29.937166  293728 cri.go:89] found id: ""
	I1206 10:10:29.937191  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.937201  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:29.937208  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:29.937270  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:29.962684  293728 cri.go:89] found id: ""
	I1206 10:10:29.962717  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.962726  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:29.962733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:29.962799  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:29.993702  293728 cri.go:89] found id: ""
	I1206 10:10:29.993776  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.993799  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:29.993818  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:29.993904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:30.061338  293728 cri.go:89] found id: ""
	I1206 10:10:30.061423  293728 logs.go:282] 0 containers: []
	W1206 10:10:30.061447  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:30.061482  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:30.061514  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:30.110307  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:30.110344  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:30.178825  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:30.178864  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:30.194614  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:30.194641  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:30.269484  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:30.258437    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.259022    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.261951    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263145    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263843    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:30.269507  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:30.269521  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:32.796483  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:32.807219  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:32.807347  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:32.832338  293728 cri.go:89] found id: ""
	I1206 10:10:32.832365  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.832374  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:32.832381  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:32.832443  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:32.857737  293728 cri.go:89] found id: ""
	I1206 10:10:32.857763  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.857771  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:32.857780  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:32.857840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:32.886514  293728 cri.go:89] found id: ""
	I1206 10:10:32.886537  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.886546  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:32.886553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:32.886622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:32.916133  293728 cri.go:89] found id: ""
	I1206 10:10:32.916157  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.916166  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:32.916172  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:32.916278  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:32.940460  293728 cri.go:89] found id: ""
	I1206 10:10:32.940485  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.940493  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:32.940500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:32.940580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:32.967101  293728 cri.go:89] found id: ""
	I1206 10:10:32.967129  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.967139  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:32.967146  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:32.967255  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:33.003657  293728 cri.go:89] found id: ""
	I1206 10:10:33.003687  293728 logs.go:282] 0 containers: []
	W1206 10:10:33.003696  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:33.003703  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:33.003817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:33.034541  293728 cri.go:89] found id: ""
	I1206 10:10:33.034570  293728 logs.go:282] 0 containers: []
	W1206 10:10:33.034579  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:33.034587  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:33.034599  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:33.103182  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:33.094513    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.095149    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.096956    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.097426    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.099078    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:33.103205  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:33.103219  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:33.129473  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:33.129508  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:33.158555  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:33.158583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:33.216375  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:33.216409  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
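	Every describe-nodes attempt in this section fails the same way: nothing is listening on localhost:8443, so the control plane never came up and kubectl cannot fetch the API group list. The failing check can be replayed by hand exactly as the log runs it (paths copied from the log; the fallback echo is illustrative):

	    # Re-run the same check the log performs; it exits non-zero while the apiserver is down.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      || echo "apiserver still unreachable on localhost:8443"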
	I1206 10:10:35.730137  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:35.743050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:35.743211  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:35.782795  293728 cri.go:89] found id: ""
	I1206 10:10:35.782873  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.782897  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:35.782917  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:35.783049  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:35.810026  293728 cri.go:89] found id: ""
	I1206 10:10:35.810102  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.810126  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:35.810144  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:35.810234  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:35.835162  293728 cri.go:89] found id: ""
	I1206 10:10:35.835240  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.835265  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:35.835286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:35.835412  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:35.860195  293728 cri.go:89] found id: ""
	I1206 10:10:35.860227  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.860236  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:35.860247  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:35.860386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:35.886939  293728 cri.go:89] found id: ""
	I1206 10:10:35.886977  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.886995  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:35.887003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:35.887093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:35.917822  293728 cri.go:89] found id: ""
	I1206 10:10:35.917848  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.917858  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:35.917864  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:35.917944  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:35.945452  293728 cri.go:89] found id: ""
	I1206 10:10:35.945478  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.945488  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:35.945494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:35.945556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:35.986146  293728 cri.go:89] found id: ""
	I1206 10:10:35.986174  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.986183  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:35.986193  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:35.986204  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:36.053722  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:36.053759  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:36.068786  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:36.068815  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:36.132981  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:36.124259    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.124911    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.126650    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.127348    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.128990    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:36.133005  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:36.133018  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:36.158971  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:36.159009  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:38.688989  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:38.699954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:38.700025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:38.732646  293728 cri.go:89] found id: ""
	I1206 10:10:38.732680  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.732689  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:38.732696  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:38.732757  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:38.760849  293728 cri.go:89] found id: ""
	I1206 10:10:38.760878  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.760888  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:38.760894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:38.760952  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:38.793233  293728 cri.go:89] found id: ""
	I1206 10:10:38.793258  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.793267  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:38.793274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:38.793355  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:38.818786  293728 cri.go:89] found id: ""
	I1206 10:10:38.818814  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.818823  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:38.818831  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:38.818925  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:38.845346  293728 cri.go:89] found id: ""
	I1206 10:10:38.845373  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.845382  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:38.845388  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:38.845449  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:38.876064  293728 cri.go:89] found id: ""
	I1206 10:10:38.876088  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.876097  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:38.876103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:38.876193  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:38.901010  293728 cri.go:89] found id: ""
	I1206 10:10:38.901037  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.901046  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:38.901053  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:38.901121  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:38.931159  293728 cri.go:89] found id: ""
	I1206 10:10:38.931185  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.931194  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:38.931203  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:38.931214  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:38.945219  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:38.945247  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:39.040279  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:39.031608    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.032449    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034282    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034607    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.036094    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:39.040303  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:39.040315  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:39.069669  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:39.069709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:39.102102  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:39.102133  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:41.662114  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:41.674379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:41.674461  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:41.700812  293728 cri.go:89] found id: ""
	I1206 10:10:41.700836  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.700846  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:41.700852  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:41.700945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:41.732717  293728 cri.go:89] found id: ""
	I1206 10:10:41.732744  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.732753  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:41.732759  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:41.732818  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:41.765582  293728 cri.go:89] found id: ""
	I1206 10:10:41.765609  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.765618  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:41.765624  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:41.765684  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:41.795133  293728 cri.go:89] found id: ""
	I1206 10:10:41.795160  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.795169  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:41.795178  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:41.795240  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:41.824848  293728 cri.go:89] found id: ""
	I1206 10:10:41.824876  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.824885  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:41.824894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:41.825002  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:41.850710  293728 cri.go:89] found id: ""
	I1206 10:10:41.850738  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.850748  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:41.850754  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:41.850817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:41.876689  293728 cri.go:89] found id: ""
	I1206 10:10:41.876714  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.876723  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:41.876730  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:41.876837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:41.910933  293728 cri.go:89] found id: ""
	I1206 10:10:41.910958  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.910967  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:41.910977  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:41.910988  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:41.940383  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:41.940411  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:42.002369  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:42.002465  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:42.036193  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:42.036220  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:42.116431  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:42.104500    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.106090    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.107160    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.108051    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.110987    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:42.116466  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:42.116485  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:44.645750  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:44.657010  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:44.657087  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:44.681487  293728 cri.go:89] found id: ""
	I1206 10:10:44.681511  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.681520  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:44.681526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:44.681632  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:44.707007  293728 cri.go:89] found id: ""
	I1206 10:10:44.707032  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.707059  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:44.707065  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:44.707124  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:44.740358  293728 cri.go:89] found id: ""
	I1206 10:10:44.740384  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.740394  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:44.740400  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:44.740462  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:44.774979  293728 cri.go:89] found id: ""
	I1206 10:10:44.775005  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.775013  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:44.775020  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:44.775099  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:44.802733  293728 cri.go:89] found id: ""
	I1206 10:10:44.802759  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.802768  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:44.802774  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:44.802836  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:44.830059  293728 cri.go:89] found id: ""
	I1206 10:10:44.830082  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.830091  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:44.830104  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:44.830164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:44.857962  293728 cri.go:89] found id: ""
	I1206 10:10:44.857988  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.857997  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:44.858003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:44.858062  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:44.882971  293728 cri.go:89] found id: ""
	I1206 10:10:44.882993  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.883002  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:44.883011  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:44.883021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:44.939214  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:44.939249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:44.953046  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:44.953074  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:45.078537  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:45.068034    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069216    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069914    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.072098    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.073533    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:45.078570  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:45.078586  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:45.108352  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:45.108392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
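
The round above is minikube's control-plane probe: it first checks for a kube-apiserver process with pgrep, then asks crictl for each expected container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and finally gathers kubelet, dmesg, describe-nodes, containerd, and container-status diagnostics (the order varies between rounds). Every crictl query returns an empty ID list, and "describe nodes" is refused on localhost:8443, which is consistent with the apiserver container never having started. The same probes can be replayed by hand; this is a minimal sketch, assuming a placeholder profile name in $PROFILE rather than the profile from this run:

  # Replay the driver's probes over SSH (illustrative; $PROFILE is a placeholder).
  minikube ssh -p "$PROFILE" -- sudo crictl ps -a --quiet --name=kube-apiserver
  minikube ssh -p "$PROFILE" -- sudo journalctl -u kubelet -n 400
  minikube ssh -p "$PROFILE" -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
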
	I1206 10:10:47.660188  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:47.670914  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:47.670992  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:47.695337  293728 cri.go:89] found id: ""
	I1206 10:10:47.695363  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.695417  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:47.695425  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:47.695496  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:47.728763  293728 cri.go:89] found id: ""
	I1206 10:10:47.728834  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.728855  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:47.728877  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:47.728982  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:47.755564  293728 cri.go:89] found id: ""
	I1206 10:10:47.755640  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.755663  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:47.755683  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:47.755794  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:47.786763  293728 cri.go:89] found id: ""
	I1206 10:10:47.786838  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.786869  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:47.786892  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:47.786999  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:47.813109  293728 cri.go:89] found id: ""
	I1206 10:10:47.813187  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.813209  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:47.813227  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:47.813312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:47.839872  293728 cri.go:89] found id: ""
	I1206 10:10:47.839947  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.839963  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:47.839971  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:47.840029  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:47.864803  293728 cri.go:89] found id: ""
	I1206 10:10:47.864827  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.864835  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:47.864842  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:47.864908  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:47.893715  293728 cri.go:89] found id: ""
	I1206 10:10:47.893740  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.893749  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:47.893759  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:47.893770  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:47.962240  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:47.954010    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.954579    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956159    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956626    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.958129    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:47.962263  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:47.962275  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:47.988774  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:47.988808  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:48.022271  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:48.022301  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:48.088564  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:48.088601  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
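
"connect: connection refused" on [::1]:8443 means nothing is bound to the apiserver port at all, as opposed to a TLS or authorization failure from a live apiserver. A quick way to confirm from inside the node, sketched here under the assumption that the node is still up and reachable:

  # Is anything listening on 8443? (no output means no listener)
  sudo ss -lntp | grep 8443
  # Mirror kubectl's probe directly; expect "Connection refused" while the apiserver is down.
  curl -k https://localhost:8443/api
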
	I1206 10:10:50.605005  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:50.615765  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:50.615847  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:50.641365  293728 cri.go:89] found id: ""
	I1206 10:10:50.641389  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.641397  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:50.641404  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:50.641468  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:50.665749  293728 cri.go:89] found id: ""
	I1206 10:10:50.665775  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.665784  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:50.665790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:50.665848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:50.693092  293728 cri.go:89] found id: ""
	I1206 10:10:50.693117  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.693133  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:50.693139  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:50.693198  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:50.721292  293728 cri.go:89] found id: ""
	I1206 10:10:50.721319  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.721328  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:50.721335  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:50.721394  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:50.757580  293728 cri.go:89] found id: ""
	I1206 10:10:50.757608  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.757617  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:50.757623  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:50.757681  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:50.795246  293728 cri.go:89] found id: ""
	I1206 10:10:50.795275  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.795284  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:50.795290  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:50.795352  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:50.831466  293728 cri.go:89] found id: ""
	I1206 10:10:50.831489  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.831497  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:50.831503  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:50.831563  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:50.856692  293728 cri.go:89] found id: ""
	I1206 10:10:50.856719  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.856728  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:50.856737  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:50.856748  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:50.914369  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:50.914404  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:50.928218  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:50.928249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:51.001552  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:50.990416    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.991460    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.992543    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.993284    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.996113    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:51.001649  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:51.001679  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:51.035670  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:51.035706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
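
The timestamps show the driver retrying the whole probe on a roughly three-second cadence (10:10:44, :47, :50, :53, ...). A bash sketch of an equivalent bounded wait, assuming it runs inside the minikube node, would be:

  # Poll for the apiserver process for up to ~2 minutes, matching the log's ~3s cadence.
  for i in $(seq 1 40); do
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo up; break; }
    sleep 3
  done
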
	I1206 10:10:53.568268  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:53.579523  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:53.579600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:53.605604  293728 cri.go:89] found id: ""
	I1206 10:10:53.605626  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.605636  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:53.605642  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:53.605704  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:53.632535  293728 cri.go:89] found id: ""
	I1206 10:10:53.632558  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.632566  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:53.632573  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:53.632633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:53.664459  293728 cri.go:89] found id: ""
	I1206 10:10:53.664485  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.664494  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:53.664500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:53.664561  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:53.689200  293728 cri.go:89] found id: ""
	I1206 10:10:53.689227  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.689235  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:53.689242  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:53.689303  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:53.724364  293728 cri.go:89] found id: ""
	I1206 10:10:53.724391  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.724401  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:53.724408  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:53.724489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:53.760957  293728 cri.go:89] found id: ""
	I1206 10:10:53.760985  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.760995  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:53.761002  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:53.761065  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:53.795256  293728 cri.go:89] found id: ""
	I1206 10:10:53.795417  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.795469  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:53.795490  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:53.795618  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:53.820946  293728 cri.go:89] found id: ""
	I1206 10:10:53.821014  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.821028  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:53.821038  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:53.821049  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:53.850603  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:53.850632  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:53.910568  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:53.910606  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:53.924408  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:53.924435  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:53.993865  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:53.984800    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.985669    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987623    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987938    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.989469    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:53.993926  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:53.993964  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:56.525953  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:56.537170  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:56.537251  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:56.562800  293728 cri.go:89] found id: ""
	I1206 10:10:56.562825  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.562834  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:56.562841  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:56.562903  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:56.589000  293728 cri.go:89] found id: ""
	I1206 10:10:56.589032  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.589042  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:56.589048  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:56.589108  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:56.613252  293728 cri.go:89] found id: ""
	I1206 10:10:56.613276  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.613284  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:56.613291  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:56.613354  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:56.643136  293728 cri.go:89] found id: ""
	I1206 10:10:56.643176  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.643186  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:56.643193  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:56.643265  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:56.669515  293728 cri.go:89] found id: ""
	I1206 10:10:56.669539  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.669547  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:56.669554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:56.669613  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:56.694989  293728 cri.go:89] found id: ""
	I1206 10:10:56.695013  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.695022  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:56.695028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:56.695295  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:56.733872  293728 cri.go:89] found id: ""
	I1206 10:10:56.733898  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.733907  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:56.733914  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:56.733981  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:56.768700  293728 cri.go:89] found id: ""
	I1206 10:10:56.768725  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.768734  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:56.768745  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:56.768765  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:56.801786  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:56.801812  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:56.857425  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:56.857458  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:56.870898  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:56.870929  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:56.939737  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:56.930826    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.931761    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933321    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933912    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.935699    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:56.939814  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:56.939833  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:59.467303  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:59.479788  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:59.479913  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:59.507178  293728 cri.go:89] found id: ""
	I1206 10:10:59.507214  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.507223  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:59.507229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:59.507307  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:59.532362  293728 cri.go:89] found id: ""
	I1206 10:10:59.532435  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.532460  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:59.532478  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:59.532565  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:59.561793  293728 cri.go:89] found id: ""
	I1206 10:10:59.561869  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.561893  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:59.561912  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:59.562006  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:59.587885  293728 cri.go:89] found id: ""
	I1206 10:10:59.587914  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.587933  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:59.587955  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:59.588043  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:59.616632  293728 cri.go:89] found id: ""
	I1206 10:10:59.616701  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.616723  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:59.616741  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:59.616828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:59.641907  293728 cri.go:89] found id: ""
	I1206 10:10:59.641942  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.641950  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:59.641957  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:59.642030  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:59.666146  293728 cri.go:89] found id: ""
	I1206 10:10:59.666181  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.666190  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:59.666197  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:59.666267  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:59.690454  293728 cri.go:89] found id: ""
	I1206 10:10:59.690525  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.690549  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:59.690571  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:59.690606  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:59.747565  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:59.747602  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:59.761979  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:59.762033  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:59.832718  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:59.824094    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825243    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825921    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827020    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827705    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:59.832743  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:59.832755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:59.858330  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:59.858360  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
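
The "container status" gatherer above is deliberately runtime-agnostic: "which crictl || echo crictl" substitutes crictl's full path when it is installed (falling back to the bare name), and only if that crictl listing exits nonzero does the command fall back to "sudo docker ps -a". On this containerd node crictl is present and exits 0 with an empty table, so the docker branch never runs.
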
	I1206 10:11:02.390395  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:02.401485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:02.401558  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:02.427611  293728 cri.go:89] found id: ""
	I1206 10:11:02.427638  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.427647  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:02.427654  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:02.427729  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:02.454049  293728 cri.go:89] found id: ""
	I1206 10:11:02.454078  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.454087  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:02.454093  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:02.454154  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:02.480392  293728 cri.go:89] found id: ""
	I1206 10:11:02.480417  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.480425  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:02.480431  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:02.480489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:02.506546  293728 cri.go:89] found id: ""
	I1206 10:11:02.506572  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.506581  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:02.506587  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:02.506647  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:02.531917  293728 cri.go:89] found id: ""
	I1206 10:11:02.531954  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.531963  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:02.531979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:02.532097  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:02.559738  293728 cri.go:89] found id: ""
	I1206 10:11:02.559759  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.559768  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:02.559774  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:02.559834  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:02.584556  293728 cri.go:89] found id: ""
	I1206 10:11:02.584578  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.584587  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:02.584593  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:02.584652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:02.617108  293728 cri.go:89] found id: ""
	I1206 10:11:02.617164  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.617174  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:02.617183  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:02.617199  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:02.645764  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:02.645802  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:02.675285  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:02.675317  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:02.733222  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:02.733262  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:02.747026  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:02.747069  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:02.827017  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:02.817993    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.818819    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.820650    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.821248    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.822937    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:05.327889  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:05.338718  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:05.338812  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:05.363857  293728 cri.go:89] found id: ""
	I1206 10:11:05.363882  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.363892  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:05.363899  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:05.363969  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:05.389419  293728 cri.go:89] found id: ""
	I1206 10:11:05.389444  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.389453  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:05.389462  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:05.389522  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:05.416875  293728 cri.go:89] found id: ""
	I1206 10:11:05.416937  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.416952  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:05.416960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:05.417018  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:05.445294  293728 cri.go:89] found id: ""
	I1206 10:11:05.445316  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.445325  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:05.445331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:05.445389  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:05.469930  293728 cri.go:89] found id: ""
	I1206 10:11:05.469952  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.469960  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:05.469966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:05.470023  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:05.494527  293728 cri.go:89] found id: ""
	I1206 10:11:05.494591  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.494623  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:05.494641  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:05.494712  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:05.519703  293728 cri.go:89] found id: ""
	I1206 10:11:05.519727  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.519736  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:05.519742  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:05.519802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:05.544697  293728 cri.go:89] found id: ""
	I1206 10:11:05.544721  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.544729  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:05.544738  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:05.544751  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:05.558261  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:05.558288  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:05.627696  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:05.618572   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.619577   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.621405   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.622011   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.623059   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:05.627760  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:05.627781  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:05.653464  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:05.653499  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:05.684619  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:05.684647  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:08.247509  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:08.260609  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:08.260730  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:08.289483  293728 cri.go:89] found id: ""
	I1206 10:11:08.289551  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.289567  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:08.289580  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:08.289640  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:08.318013  293728 cri.go:89] found id: ""
	I1206 10:11:08.318037  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.318045  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:08.318051  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:08.318110  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:08.351762  293728 cri.go:89] found id: ""
	I1206 10:11:08.351785  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.351794  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:08.351800  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:08.351858  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:08.377083  293728 cri.go:89] found id: ""
	I1206 10:11:08.377159  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.377174  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:08.377181  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:08.377240  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:08.406041  293728 cri.go:89] found id: ""
	I1206 10:11:08.406063  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.406072  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:08.406077  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:08.406135  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:08.430970  293728 cri.go:89] found id: ""
	I1206 10:11:08.430996  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.431004  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:08.431011  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:08.431096  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:08.454833  293728 cri.go:89] found id: ""
	I1206 10:11:08.454857  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.454865  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:08.454872  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:08.454931  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:08.484046  293728 cri.go:89] found id: ""
	I1206 10:11:08.484113  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.484129  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
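Each pass walks the same eight component names through crictl and finds nothing. A condensed equivalent of that scan (a sketch, run inside the node):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "no container matching $c"
	done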
	I1206 10:11:08.484139  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:08.484150  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:08.551224  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:08.542554   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.543265   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545049   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545727   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.547350   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:08.551247  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:08.551259  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:08.577706  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:08.577740  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:08.605435  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:08.605462  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:08.665984  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:08.666020  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
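For reference, the short dmesg flags above expand as follows in util-linux dmesg: -H is --human, -P is --nopager, and -L=never disables color; --level keeps warnings and worse. Long-form equivalent (sketch):

	sudo dmesg --human --nopager --color=never \
	  --level warn,err,crit,alert,emerg | tail -n 400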
	I1206 10:11:11.180758  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:11.193428  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:11.193501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:11.230343  293728 cri.go:89] found id: ""
	I1206 10:11:11.230374  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.230383  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:11.230389  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:11.230452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:11.267153  293728 cri.go:89] found id: ""
	I1206 10:11:11.267177  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.267187  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:11.267193  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:11.267258  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:11.299679  293728 cri.go:89] found id: ""
	I1206 10:11:11.299708  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.299718  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:11.299724  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:11.299784  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:11.325476  293728 cri.go:89] found id: ""
	I1206 10:11:11.325503  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.325512  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:11.325518  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:11.325600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:11.351586  293728 cri.go:89] found id: ""
	I1206 10:11:11.351614  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.351624  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:11.351632  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:11.351700  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:11.377176  293728 cri.go:89] found id: ""
	I1206 10:11:11.377203  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.377212  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:11.377219  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:11.377308  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:11.402618  293728 cri.go:89] found id: ""
	I1206 10:11:11.402644  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.402652  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:11.402659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:11.402745  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:11.429503  293728 cri.go:89] found id: ""
	I1206 10:11:11.429529  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.429538  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:11.429547  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:11.429562  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:11.486599  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:11.486638  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:11.500957  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:11.500987  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:11.577987  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:11.568882   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.569760   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.571647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.572318   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.573801   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:11.578008  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:11.578021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:11.604993  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:11.605027  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
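The timestamps show the whole diagnostic pass repeating roughly every three seconds, gated on the pgrep check for a live apiserver process. An equivalent wait loop looks like this (a sketch; the 300-second budget is illustrative, not minikube's actual timeout):

	deadline=$((SECONDS + 300))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo "apiserver never came up" >&2
	    exit 1
	  fi
	  sleep 3
	done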
	I1206 10:11:14.137875  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:14.148737  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:14.148811  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:14.173594  293728 cri.go:89] found id: ""
	I1206 10:11:14.173671  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.173695  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:14.173714  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:14.173809  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:14.200007  293728 cri.go:89] found id: ""
	I1206 10:11:14.200033  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.200043  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:14.200050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:14.200117  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:14.233924  293728 cri.go:89] found id: ""
	I1206 10:11:14.233951  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.233959  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:14.233966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:14.234030  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:14.264436  293728 cri.go:89] found id: ""
	I1206 10:11:14.264464  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.264474  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:14.264480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:14.264540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:14.292320  293728 cri.go:89] found id: ""
	I1206 10:11:14.292348  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.292359  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:14.292365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:14.292426  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:14.317612  293728 cri.go:89] found id: ""
	I1206 10:11:14.317640  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.317649  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:14.317656  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:14.317714  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:14.342496  293728 cri.go:89] found id: ""
	I1206 10:11:14.342521  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.342530  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:14.342536  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:14.342596  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:14.368247  293728 cri.go:89] found id: ""
	I1206 10:11:14.368273  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.368282  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:14.368292  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:14.368304  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:14.394942  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:14.394976  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:14.428315  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:14.428345  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:14.484824  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:14.484855  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:14.498675  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:14.498705  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:14.568051  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:14.559253   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.560001   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.561736   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.562345   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.564094   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
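The memcache.go lines are client-go's discovery cache failing its first /api request against whatever endpoint the kubeconfig names. To confirm which server the in-node kubeconfig targets (a sketch; the path is taken from the command in this log):

	sudo grep 'server:' /var/lib/minikube/kubeconfig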
	I1206 10:11:17.068293  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:17.078902  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:17.078976  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:17.103674  293728 cri.go:89] found id: ""
	I1206 10:11:17.103699  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.103708  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:17.103715  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:17.103777  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:17.139412  293728 cri.go:89] found id: ""
	I1206 10:11:17.139481  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.139503  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:17.139523  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:17.139610  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:17.168435  293728 cri.go:89] found id: ""
	I1206 10:11:17.168461  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.168470  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:17.168476  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:17.168568  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:17.198788  293728 cri.go:89] found id: ""
	I1206 10:11:17.198854  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.198879  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:17.198898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:17.198983  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:17.233132  293728 cri.go:89] found id: ""
	I1206 10:11:17.233218  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.233242  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:17.233262  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:17.233356  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:17.268547  293728 cri.go:89] found id: ""
	I1206 10:11:17.268613  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.268637  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:17.268655  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:17.268741  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:17.303935  293728 cri.go:89] found id: ""
	I1206 10:11:17.303957  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.303966  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:17.303972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:17.304032  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:17.328050  293728 cri.go:89] found id: ""
	I1206 10:11:17.328074  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.328084  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:17.328092  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:17.328139  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:17.387715  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:17.387750  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:17.401545  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:17.401576  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:17.467905  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:17.459187   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.459639   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461308   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461736   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.463309   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:17.467927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:17.467939  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:17.493972  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:17.494007  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
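With every crictl scan coming back empty, the kubelet and containerd journals gathered above are where the root cause will surface; a first sanity check is whether the two services are active at all (a sketch, run inside the node):

	sudo systemctl is-active kubelet containerd
	# Mirror the harness's own journal query, filtered to recent errors:
	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20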
	I1206 10:11:20.027522  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:20.040220  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:20.040323  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:20.068566  293728 cri.go:89] found id: ""
	I1206 10:11:20.068592  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.068602  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:20.068610  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:20.068691  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:20.096577  293728 cri.go:89] found id: ""
	I1206 10:11:20.096616  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.096626  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:20.096633  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:20.096791  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:20.125150  293728 cri.go:89] found id: ""
	I1206 10:11:20.125175  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.125185  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:20.125192  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:20.125253  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:20.151199  293728 cri.go:89] found id: ""
	I1206 10:11:20.151225  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.151234  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:20.151241  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:20.151303  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:20.177323  293728 cri.go:89] found id: ""
	I1206 10:11:20.177349  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.177359  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:20.177365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:20.177454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:20.207914  293728 cri.go:89] found id: ""
	I1206 10:11:20.207940  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.207950  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:20.207956  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:20.208015  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:20.250213  293728 cri.go:89] found id: ""
	I1206 10:11:20.250247  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.250256  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:20.250265  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:20.250336  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:20.284320  293728 cri.go:89] found id: ""
	I1206 10:11:20.284356  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.284365  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:20.284374  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:20.284384  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:20.317496  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:20.317524  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:20.373988  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:20.374021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:20.387702  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:20.387728  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:20.454347  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:20.446421   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.447014   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448572   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448979   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.450465   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:20.454370  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:20.454383  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:22.980202  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:22.991835  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:22.991961  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:23.022299  293728 cri.go:89] found id: ""
	I1206 10:11:23.022379  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.022404  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:23.022423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:23.022532  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:23.055611  293728 cri.go:89] found id: ""
	I1206 10:11:23.055634  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.055643  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:23.055649  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:23.055708  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:23.080752  293728 cri.go:89] found id: ""
	I1206 10:11:23.080828  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.080850  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:23.080870  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:23.080965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:23.106107  293728 cri.go:89] found id: ""
	I1206 10:11:23.106134  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.106143  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:23.106150  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:23.106212  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:23.132303  293728 cri.go:89] found id: ""
	I1206 10:11:23.132327  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.132335  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:23.132342  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:23.132408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:23.156632  293728 cri.go:89] found id: ""
	I1206 10:11:23.156697  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.156712  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:23.156719  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:23.156775  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:23.180697  293728 cri.go:89] found id: ""
	I1206 10:11:23.180764  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.180777  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:23.180784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:23.180842  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:23.208267  293728 cri.go:89] found id: ""
	I1206 10:11:23.208341  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.208364  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:23.208387  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:23.208425  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:23.292598  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:23.283687   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.284573   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.286441   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.287115   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.288724   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:23.292618  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:23.292631  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:23.318604  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:23.318641  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:23.352649  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:23.352676  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:23.411769  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:23.411803  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:25.925870  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:25.936619  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:25.936701  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:25.963699  293728 cri.go:89] found id: ""
	I1206 10:11:25.963722  293728 logs.go:282] 0 containers: []
	W1206 10:11:25.963731  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:25.963738  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:25.963802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:25.995991  293728 cri.go:89] found id: ""
	I1206 10:11:25.996066  293728 logs.go:282] 0 containers: []
	W1206 10:11:25.996088  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:25.996106  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:25.996196  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:26.030700  293728 cri.go:89] found id: ""
	I1206 10:11:26.030728  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.030738  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:26.030745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:26.030809  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:26.066012  293728 cri.go:89] found id: ""
	I1206 10:11:26.066044  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.066054  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:26.066060  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:26.066125  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:26.092723  293728 cri.go:89] found id: ""
	I1206 10:11:26.092753  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.092763  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:26.092769  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:26.092837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:26.120031  293728 cri.go:89] found id: ""
	I1206 10:11:26.120108  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.120125  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:26.120132  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:26.120198  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:26.147104  293728 cri.go:89] found id: ""
	I1206 10:11:26.147131  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.147152  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:26.147158  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:26.147257  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:26.173188  293728 cri.go:89] found id: ""
	I1206 10:11:26.173212  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.173221  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:26.173230  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:26.173273  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:26.259536  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:26.250765   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.251710   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253385   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253690   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.255208   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:26.259581  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:26.259596  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:26.288770  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:26.288853  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:26.318991  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:26.319082  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:26.377710  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:26.377743  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:28.892920  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:28.903557  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:28.903622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:28.928667  293728 cri.go:89] found id: ""
	I1206 10:11:28.928691  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.928699  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:28.928707  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:28.928767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:28.953528  293728 cri.go:89] found id: ""
	I1206 10:11:28.953554  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.953562  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:28.953568  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:28.953626  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:28.981995  293728 cri.go:89] found id: ""
	I1206 10:11:28.982022  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.982031  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:28.982037  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:28.982101  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:29.021133  293728 cri.go:89] found id: ""
	I1206 10:11:29.021161  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.021170  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:29.021177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:29.021244  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:29.051961  293728 cri.go:89] found id: ""
	I1206 10:11:29.052044  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.052056  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:29.052063  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:29.052157  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:29.076239  293728 cri.go:89] found id: ""
	I1206 10:11:29.076260  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.076268  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:29.076274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:29.076331  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:29.100533  293728 cri.go:89] found id: ""
	I1206 10:11:29.100568  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.100577  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:29.100583  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:29.100642  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:29.125877  293728 cri.go:89] found id: ""
	I1206 10:11:29.125900  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.125909  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:29.125917  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:29.125929  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:29.184407  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:29.184441  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:29.198478  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:29.198553  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:29.291075  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:29.280844   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.281788   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285131   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285582   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.287240   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:29.280844   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.281788   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285131   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285582   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.287240   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:29.291096  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:29.291109  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:29.317026  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:29.317059  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
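
The cycle that just completed is minikube's control-plane probe: a process check ("sudo pgrep -xnf kube-apiserver.*minikube.*"), followed by a per-component CRI query ("sudo crictl ps -a --quiet --name=<component>"); an empty ID list is what produces each "No container was found matching ..." warning above. Below is a minimal Go sketch of that enumeration, assuming only that crictl is installed on the node; the component list is copied from the log, everything else is illustrative rather than minikube's actual cri.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names queried in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Same command the log shows ssh_runner executing on the node.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %v\n", name, ids)
	}
}

On a healthy node the kube-apiserver query would return at least one container ID; here every query comes back empty, which is why each pass falls through to log gathering.
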
	I1206 10:11:31.845985  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:31.857066  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:31.857145  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:31.882982  293728 cri.go:89] found id: ""
	I1206 10:11:31.883059  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.883081  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:31.883101  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:31.883187  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:31.908108  293728 cri.go:89] found id: ""
	I1206 10:11:31.908138  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.908148  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:31.908154  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:31.908244  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:31.933164  293728 cri.go:89] found id: ""
	I1206 10:11:31.933188  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.933197  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:31.933204  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:31.933261  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:31.961760  293728 cri.go:89] found id: ""
	I1206 10:11:31.961784  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.961792  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:31.961798  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:31.961864  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:31.993806  293728 cri.go:89] found id: ""
	I1206 10:11:31.993836  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.993845  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:31.993851  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:31.993915  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:32.025453  293728 cri.go:89] found id: ""
	I1206 10:11:32.025480  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.025489  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:32.025496  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:32.025556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:32.053138  293728 cri.go:89] found id: ""
	I1206 10:11:32.053160  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.053171  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:32.053177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:32.053236  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:32.084984  293728 cri.go:89] found id: ""
	I1206 10:11:32.085009  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.085018  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:32.085027  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:32.085058  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:32.113246  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:32.113276  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:32.170516  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:32.170553  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:32.184767  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:32.184797  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:32.266194  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:32.257320   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.258649   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.259490   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.260223   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.261917   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:32.257320   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.258649   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.259490   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.260223   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.261917   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:32.266261  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:32.266289  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
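
Every describe-nodes attempt in this window fails the same way: kubectl cannot reach the apiserver at localhost:8443 ("connect: connection refused"), which is consistent with the empty kube-apiserver container list, since nothing is listening on the port at all. A refused TCP dial is the quickest way to separate "no listener" from "listener present but unhealthy"; the sketch below does that check, with the address taken from the kubectl errors above.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address copied from the log's "https://localhost:8443" errors.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the kubectl stderr: no process owns the port.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	// A successful dial would instead point at TLS, auth, or apiserver-level problems.
	fmt.Println("port 8443 is accepting connections")
}
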
	I1206 10:11:34.798474  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:34.809168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:34.809239  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:34.837292  293728 cri.go:89] found id: ""
	I1206 10:11:34.837314  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.837322  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:34.837329  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:34.837387  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:34.863331  293728 cri.go:89] found id: ""
	I1206 10:11:34.863353  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.863362  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:34.863369  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:34.863465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:34.893355  293728 cri.go:89] found id: ""
	I1206 10:11:34.893379  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.893388  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:34.893395  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:34.893452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:34.919127  293728 cri.go:89] found id: ""
	I1206 10:11:34.919153  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.919162  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:34.919169  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:34.919228  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:34.948423  293728 cri.go:89] found id: ""
	I1206 10:11:34.948448  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.948458  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:34.948467  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:34.948526  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:34.984476  293728 cri.go:89] found id: ""
	I1206 10:11:34.984503  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.984513  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:34.984520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:34.984579  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:35.017804  293728 cri.go:89] found id: ""
	I1206 10:11:35.017831  293728 logs.go:282] 0 containers: []
	W1206 10:11:35.017840  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:35.017847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:35.017955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:35.049243  293728 cri.go:89] found id: ""
	I1206 10:11:35.049270  293728 logs.go:282] 0 containers: []
	W1206 10:11:35.049279  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:35.049288  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:35.049300  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:35.109333  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:35.109371  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:35.123612  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:35.123643  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:35.191474  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:35.181616   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.182533   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184226   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184809   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.186401   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:35.181616   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.182533   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184226   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184809   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.186401   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:35.191495  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:35.191509  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:35.217926  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:35.218007  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:37.758372  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:37.769553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:37.769625  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:37.799573  293728 cri.go:89] found id: ""
	I1206 10:11:37.799606  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.799617  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:37.799626  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:37.799697  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:37.828542  293728 cri.go:89] found id: ""
	I1206 10:11:37.828580  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.828589  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:37.828595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:37.828670  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:37.854197  293728 cri.go:89] found id: ""
	I1206 10:11:37.854223  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.854233  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:37.854239  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:37.854299  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:37.879147  293728 cri.go:89] found id: ""
	I1206 10:11:37.879220  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.879243  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:37.879261  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:37.879346  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:37.905390  293728 cri.go:89] found id: ""
	I1206 10:11:37.905412  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.905421  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:37.905428  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:37.905533  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:37.933187  293728 cri.go:89] found id: ""
	I1206 10:11:37.933251  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.933266  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:37.933273  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:37.933333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:37.957719  293728 cri.go:89] found id: ""
	I1206 10:11:37.957743  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.957756  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:37.957763  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:37.957823  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:37.991726  293728 cri.go:89] found id: ""
	I1206 10:11:37.991755  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.991765  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:37.991775  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:37.991787  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:38.072266  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:38.063102   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.063715   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.065465   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.066011   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.067888   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:38.063102   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.063715   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.065465   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.066011   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.067888   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:38.072293  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:38.072308  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:38.100264  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:38.100302  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:38.128959  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:38.128989  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:38.186487  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:38.186517  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
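
Between probes, each cycle gathers the same five diagnostics, only their order varies: kubelet and containerd unit logs (journalctl -u <unit> -n 400), filtered kernel messages (dmesg -PH -L=never --level warn,err,crit,alert,emerg, where -P disables the pager, -H picks human-readable output, -L=never turns off color, and --level keeps only the listed severities), kubectl describe nodes against the node-local kubeconfig, and a container listing that falls back from crictl to docker. The Go sketch below replays that bundle; the command strings are copied verbatim from the log, while the wrapper code is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// gathers lists the five diagnostics in one of the orders seen above.
var gathers = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
	{"containerd", "sudo journalctl -u containerd -n 400"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, g := range gathers {
		// The log shows each command wrapped in /bin/bash -c; do the same here.
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		fmt.Printf("== %s ==\n%s", g.name, out)
		if err != nil {
			// "describe nodes" exits 1 while the apiserver is down, as seen above.
			fmt.Printf("(%s failed: %v)\n", g.name, err)
		}
	}
}
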
	I1206 10:11:40.700896  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:40.711768  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:40.711841  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:40.737641  293728 cri.go:89] found id: ""
	I1206 10:11:40.737664  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.737675  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:40.737681  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:40.737740  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:40.763410  293728 cri.go:89] found id: ""
	I1206 10:11:40.763437  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.763447  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:40.763453  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:40.763521  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:40.788254  293728 cri.go:89] found id: ""
	I1206 10:11:40.788277  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.788287  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:40.788293  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:40.788351  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:40.812429  293728 cri.go:89] found id: ""
	I1206 10:11:40.812454  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.812464  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:40.812470  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:40.812577  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:40.836598  293728 cri.go:89] found id: ""
	I1206 10:11:40.836623  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.836632  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:40.836639  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:40.836699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:40.865558  293728 cri.go:89] found id: ""
	I1206 10:11:40.865584  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.865593  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:40.865600  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:40.865658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:40.890394  293728 cri.go:89] found id: ""
	I1206 10:11:40.890419  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.890428  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:40.890434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:40.890494  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:40.919443  293728 cri.go:89] found id: ""
	I1206 10:11:40.919471  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.919480  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:40.919489  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:40.919501  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:40.932761  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:40.932788  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:41.018904  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:41.007696   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.008625   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.010702   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.011857   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.013002   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:41.007696   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.008625   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.010702   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.011857   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.013002   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:41.018927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:41.018942  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:41.049613  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:41.049648  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:41.077525  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:41.077552  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:43.637314  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:43.648009  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:43.648084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:43.673268  293728 cri.go:89] found id: ""
	I1206 10:11:43.673291  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.673299  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:43.673306  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:43.673363  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:43.698533  293728 cri.go:89] found id: ""
	I1206 10:11:43.698563  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.698573  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:43.698579  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:43.698666  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:43.726409  293728 cri.go:89] found id: ""
	I1206 10:11:43.726434  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.726443  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:43.726449  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:43.726524  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:43.753336  293728 cri.go:89] found id: ""
	I1206 10:11:43.753361  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.753371  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:43.753377  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:43.753468  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:43.778503  293728 cri.go:89] found id: ""
	I1206 10:11:43.778526  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.778535  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:43.778541  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:43.778622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:43.806530  293728 cri.go:89] found id: ""
	I1206 10:11:43.806554  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.806564  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:43.806570  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:43.806652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:43.831543  293728 cri.go:89] found id: ""
	I1206 10:11:43.831570  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.831579  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:43.831585  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:43.831644  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:43.856767  293728 cri.go:89] found id: ""
	I1206 10:11:43.856791  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.856800  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:43.856808  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:43.856821  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:43.926714  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:43.918532   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.919086   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.920754   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.921218   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.922816   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:43.918532   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.919086   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.920754   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.921218   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.922816   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:43.926736  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:43.926751  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:43.953140  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:43.953176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:43.986579  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:43.986611  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:44.046797  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:44.046832  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:46.561087  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:46.574475  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:46.574548  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:46.603568  293728 cri.go:89] found id: ""
	I1206 10:11:46.603593  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.603601  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:46.603608  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:46.603688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:46.629999  293728 cri.go:89] found id: ""
	I1206 10:11:46.630024  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.630034  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:46.630040  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:46.630120  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:46.657373  293728 cri.go:89] found id: ""
	I1206 10:11:46.657399  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.657408  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:46.657414  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:46.657472  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:46.682131  293728 cri.go:89] found id: ""
	I1206 10:11:46.682157  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.682166  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:46.682172  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:46.682229  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:46.712112  293728 cri.go:89] found id: ""
	I1206 10:11:46.712184  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.712201  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:46.712209  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:46.712273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:46.737272  293728 cri.go:89] found id: ""
	I1206 10:11:46.737308  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.737317  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:46.737323  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:46.737402  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:46.762747  293728 cri.go:89] found id: ""
	I1206 10:11:46.762773  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.762782  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:46.762814  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:46.762904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:46.789056  293728 cri.go:89] found id: ""
	I1206 10:11:46.789092  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.789101  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:46.789110  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:46.789122  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:46.852031  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:46.843591   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.844469   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846096   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846414   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.847930   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:46.843591   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.844469   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846096   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846414   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.847930   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:46.852055  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:46.852068  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:46.878458  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:46.878490  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:46.909497  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:46.909523  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:46.966671  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:46.966706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:49.484723  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:49.499040  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:49.499143  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:49.530152  293728 cri.go:89] found id: ""
	I1206 10:11:49.530195  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.530204  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:49.530228  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:49.530311  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:49.556277  293728 cri.go:89] found id: ""
	I1206 10:11:49.556302  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.556311  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:49.556317  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:49.556422  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:49.582278  293728 cri.go:89] found id: ""
	I1206 10:11:49.582303  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.582312  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:49.582318  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:49.582386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:49.608504  293728 cri.go:89] found id: ""
	I1206 10:11:49.608529  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.608538  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:49.608544  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:49.608624  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:49.633347  293728 cri.go:89] found id: ""
	I1206 10:11:49.633414  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.633429  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:49.633436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:49.633495  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:49.658195  293728 cri.go:89] found id: ""
	I1206 10:11:49.658223  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.658233  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:49.658240  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:49.658297  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:49.691086  293728 cri.go:89] found id: ""
	I1206 10:11:49.691112  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.691122  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:49.691128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:49.691213  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:49.716625  293728 cri.go:89] found id: ""
	I1206 10:11:49.716652  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.716661  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:49.716669  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:49.716684  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:49.778048  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:49.778093  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:49.792187  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:49.792216  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:49.858528  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:49.849703   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.850362   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852120   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852678   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.854314   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:49.849703   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.850362   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852120   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852678   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.854314   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:49.858551  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:49.858566  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:49.884659  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:49.884691  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
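
The probe timestamps (10:11:28, :31, :34, through :52) show a fixed cadence of roughly three seconds with no backoff; the loop simply polls until the enclosing test times out. What follows is a hedged sketch of such a fixed-interval wait; the three-second interval and the TCP predicate are inferred from the log's timing and errors, not taken from minikube's source.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// apiserverUp reports whether anything accepts TCP connections on :8443.
func apiserverUp(ctx context.Context) bool {
	d := net.Dialer{Timeout: 2 * time.Second}
	conn, err := d.DialContext(ctx, "tcp", "localhost:8443")
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	ticker := time.NewTicker(3 * time.Second) // matches the spacing of the pgrep probes
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			fmt.Println("gave up waiting for the apiserver:", ctx.Err())
			return
		case <-ticker.C:
			if apiserverUp(ctx) {
				fmt.Println("apiserver is listening on :8443")
				return
			}
		}
	}
}
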
	I1206 10:11:52.413397  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:52.424250  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:52.424322  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:52.454481  293728 cri.go:89] found id: ""
	I1206 10:11:52.454557  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.454573  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:52.454581  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:52.454642  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:52.487281  293728 cri.go:89] found id: ""
	I1206 10:11:52.487315  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.487325  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:52.487331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:52.487408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:52.522975  293728 cri.go:89] found id: ""
	I1206 10:11:52.523008  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.523025  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:52.523032  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:52.523102  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:52.557389  293728 cri.go:89] found id: ""
	I1206 10:11:52.557421  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.557430  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:52.557436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:52.557494  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:52.583449  293728 cri.go:89] found id: ""
	I1206 10:11:52.583474  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.583483  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:52.583490  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:52.583608  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:52.608370  293728 cri.go:89] found id: ""
	I1206 10:11:52.608412  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.608422  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:52.608429  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:52.608499  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:52.637950  293728 cri.go:89] found id: ""
	I1206 10:11:52.638026  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.638051  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:52.638069  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:52.638160  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:52.663271  293728 cri.go:89] found id: ""
	I1206 10:11:52.663349  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.663413  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:52.663443  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:52.663464  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:52.721303  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:52.721339  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:52.735517  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:52.735548  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:52.806629  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:52.798101   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.799086   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800264   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800722   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.802387   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:52.806652  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:52.806666  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:52.834909  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:52.834944  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:55.365104  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:55.376039  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:55.376112  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:55.401088  293728 cri.go:89] found id: ""
	I1206 10:11:55.401114  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.401123  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:55.401130  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:55.401187  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:55.426712  293728 cri.go:89] found id: ""
	I1206 10:11:55.426735  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.426744  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:55.426752  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:55.426808  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:55.453355  293728 cri.go:89] found id: ""
	I1206 10:11:55.453433  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.453449  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:55.453456  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:55.453524  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:55.482694  293728 cri.go:89] found id: ""
	I1206 10:11:55.482786  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.482809  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:55.482831  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:55.482965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:55.517524  293728 cri.go:89] found id: ""
	I1206 10:11:55.517567  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.517576  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:55.517582  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:55.517651  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:55.552808  293728 cri.go:89] found id: ""
	I1206 10:11:55.552887  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.552919  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:55.552943  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:55.553051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:55.582318  293728 cri.go:89] found id: ""
	I1206 10:11:55.582391  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.582413  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:55.582435  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:55.582545  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:55.611979  293728 cri.go:89] found id: ""
	I1206 10:11:55.612012  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.612021  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:55.612030  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:55.612043  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:55.641663  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:55.641691  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:55.699247  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:55.699281  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:55.714284  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:55.714312  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:55.779980  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:55.771718   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.772511   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774153   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774506   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.776084   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
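	When crictl reports zero control-plane containers, the kubelet journal gathered above ('journalctl -u kubelet -n 400') is usually where the cause surfaces. A narrower view that keeps only error-level entries is often enough (a sketch; run inside the node):

    # Show just the most recent error-level kubelet entries.
    sudo journalctl -u kubelet --no-pager -p err -n 50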
	I1206 10:11:55.780002  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:55.780020  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:58.307533  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:58.318444  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:58.318517  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:58.346128  293728 cri.go:89] found id: ""
	I1206 10:11:58.346181  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.346194  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:58.346202  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:58.346276  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:58.370957  293728 cri.go:89] found id: ""
	I1206 10:11:58.370992  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.371001  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:58.371013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:58.371093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:58.397685  293728 cri.go:89] found id: ""
	I1206 10:11:58.397717  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.397726  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:58.397732  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:58.397803  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:58.426933  293728 cri.go:89] found id: ""
	I1206 10:11:58.426959  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.426967  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:58.426973  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:58.427051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:58.456330  293728 cri.go:89] found id: ""
	I1206 10:11:58.456365  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.456375  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:58.456381  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:58.456448  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:58.494975  293728 cri.go:89] found id: ""
	I1206 10:11:58.495018  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.495027  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:58.495034  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:58.495106  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:58.532346  293728 cri.go:89] found id: ""
	I1206 10:11:58.532379  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.532389  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:58.532395  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:58.532465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:58.558540  293728 cri.go:89] found id: ""
	I1206 10:11:58.558576  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.558584  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:58.558593  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:58.558605  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:58.573220  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:58.573249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:58.639437  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:58.631044   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.631569   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633054   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633435   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.634868   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:58.639512  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:58.639535  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:58.664823  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:58.664861  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:58.692934  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:58.692966  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
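	Because the expected pods never appear at all, it is worth ruling out the container runtime itself before blaming the control plane. Two quick checks inside the node (a sketch; 'crictl info' prints the runtime and network status conditions near the top of its JSON output):

    # Confirm the runtime service is up, then inspect its reported conditions.
    sudo systemctl is-active containerd
    sudo crictl info | head -n 40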
	I1206 10:12:01.250858  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:01.262935  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:01.263112  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:01.291081  293728 cri.go:89] found id: ""
	I1206 10:12:01.291107  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.291117  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:01.291123  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:01.291204  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:01.318105  293728 cri.go:89] found id: ""
	I1206 10:12:01.318138  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.318147  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:01.318168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:01.318249  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:01.344419  293728 cri.go:89] found id: ""
	I1206 10:12:01.344488  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.344514  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:01.344528  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:01.344601  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:01.370652  293728 cri.go:89] found id: ""
	I1206 10:12:01.370677  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.370686  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:01.370693  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:01.370751  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:01.397501  293728 cri.go:89] found id: ""
	I1206 10:12:01.397528  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.397538  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:01.397544  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:01.397603  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:01.423444  293728 cri.go:89] found id: ""
	I1206 10:12:01.423517  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.423541  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:01.423563  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:01.423646  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:01.453268  293728 cri.go:89] found id: ""
	I1206 10:12:01.453294  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.453303  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:01.453316  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:01.453417  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:01.481810  293728 cri.go:89] found id: ""
	I1206 10:12:01.481890  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.481915  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:01.481932  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:01.481959  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:01.538994  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:01.539079  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:01.553293  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:01.553320  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:01.623989  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:01.612749   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.615513   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.616460   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618024   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618347   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:12:01.624063  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:01.624085  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:01.649724  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:01.649757  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:04.179886  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:04.191201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:04.191273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:04.216964  293728 cri.go:89] found id: ""
	I1206 10:12:04.217045  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.217065  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:04.217072  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:04.217168  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:04.252840  293728 cri.go:89] found id: ""
	I1206 10:12:04.252875  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.252884  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:04.252891  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:04.252965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:04.281583  293728 cri.go:89] found id: ""
	I1206 10:12:04.281614  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.281623  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:04.281629  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:04.281695  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:04.311479  293728 cri.go:89] found id: ""
	I1206 10:12:04.311547  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.311571  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:04.311585  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:04.311658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:04.337184  293728 cri.go:89] found id: ""
	I1206 10:12:04.337213  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.337221  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:04.337228  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:04.337307  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:04.363672  293728 cri.go:89] found id: ""
	I1206 10:12:04.363705  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.363715  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:04.363738  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:04.363836  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:04.394214  293728 cri.go:89] found id: ""
	I1206 10:12:04.394240  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.394249  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:04.394256  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:04.394367  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:04.419254  293728 cri.go:89] found id: ""
	I1206 10:12:04.419335  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.419359  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:04.419403  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:04.419437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:04.451555  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:04.451582  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:04.509304  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:04.509336  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:04.523821  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:04.523848  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:04.591566  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:04.581768   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.583295   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.584171   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.585971   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.586453   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
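	The failing probe can be re-run by hand with the exact binary and kubeconfig the log shows; pairing it with a direct /readyz request distinguishes "apiserver down" from "kubeconfig pointing at the wrong endpoint" (a sketch using paths taken verbatim from the log):

    # Same command minikube runs, minus the wrapper.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig /var/lib/minikube/kubeconfig get nodes
    # Direct liveness probe against the secure port.
    curl -sk https://localhost:8443/readyz || echo "apiserver not ready"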
	I1206 10:12:04.591591  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:04.591604  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:07.121570  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:07.132505  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:07.132585  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:07.157021  293728 cri.go:89] found id: ""
	I1206 10:12:07.157047  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.157056  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:07.157063  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:07.157151  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:07.182478  293728 cri.go:89] found id: ""
	I1206 10:12:07.182510  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.182519  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:07.182526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:07.182597  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:07.212401  293728 cri.go:89] found id: ""
	I1206 10:12:07.212424  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.212433  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:07.212439  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:07.212498  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:07.246228  293728 cri.go:89] found id: ""
	I1206 10:12:07.246255  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.246264  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:07.246271  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:07.246333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:07.273777  293728 cri.go:89] found id: ""
	I1206 10:12:07.273802  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.273811  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:07.273817  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:07.273878  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:07.302425  293728 cri.go:89] found id: ""
	I1206 10:12:07.302464  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.302473  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:07.302481  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:07.302556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:07.328379  293728 cri.go:89] found id: ""
	I1206 10:12:07.328403  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.328412  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:07.328418  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:07.328476  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:07.358727  293728 cri.go:89] found id: ""
	I1206 10:12:07.358751  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.358760  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:07.358771  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:07.358811  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:07.415522  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:07.415561  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:07.429309  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:07.429338  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:07.497723  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:07.488450   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.488945   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.490709   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.491285   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.492907   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:12:07.497749  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:07.497762  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:07.524612  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:07.524648  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
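	The pgrep probes in this section land roughly three seconds apart, which suggests a fixed-interval wait loop with an overall deadline. A minimal stand-alone version of that pattern (the 300 s timeout is an assumption, not a value from the log):

    # Poll for the apiserver process until it appears or the deadline passes.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -lt "$deadline" ] || { echo 'timed out waiting for kube-apiserver'; exit 1; }
      sleep 3
    done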
	I1206 10:12:10.055528  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:10.066871  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:10.066968  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:10.092582  293728 cri.go:89] found id: ""
	I1206 10:12:10.092611  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.092622  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:10.092630  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:10.092695  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:10.120230  293728 cri.go:89] found id: ""
	I1206 10:12:10.120321  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.120347  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:10.120366  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:10.120465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:10.146387  293728 cri.go:89] found id: ""
	I1206 10:12:10.146464  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.146489  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:10.146508  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:10.146582  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:10.173457  293728 cri.go:89] found id: ""
	I1206 10:12:10.173484  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.173493  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:10.173500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:10.173592  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:10.202187  293728 cri.go:89] found id: ""
	I1206 10:12:10.202262  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.202285  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:10.202303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:10.202393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:10.232838  293728 cri.go:89] found id: ""
	I1206 10:12:10.232901  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.232922  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:10.232940  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:10.233025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:10.267445  293728 cri.go:89] found id: ""
	I1206 10:12:10.267520  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.267543  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:10.267561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:10.267650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:10.298314  293728 cri.go:89] found id: ""
	I1206 10:12:10.298389  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.298412  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:10.298434  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:10.298472  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:10.325341  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:10.325374  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:10.385049  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:10.385081  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:10.398513  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:10.398540  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:10.463844  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:10.454441   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.455251   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457119   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457874   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.459632   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:12:10.463908  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:10.463945  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:12.991294  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:13.006571  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:13.006645  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:13.040431  293728 cri.go:89] found id: ""
	I1206 10:12:13.040457  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.040466  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:13.040479  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:13.040544  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:13.066025  293728 cri.go:89] found id: ""
	I1206 10:12:13.066047  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.066056  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:13.066062  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:13.066134  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:13.093459  293728 cri.go:89] found id: ""
	I1206 10:12:13.093482  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.093491  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:13.093496  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:13.093556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:13.118066  293728 cri.go:89] found id: ""
	I1206 10:12:13.118089  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.118098  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:13.118104  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:13.118162  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:13.145619  293728 cri.go:89] found id: ""
	I1206 10:12:13.145685  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.145704  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:13.145711  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:13.145770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:13.174833  293728 cri.go:89] found id: ""
	I1206 10:12:13.174857  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.174866  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:13.174872  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:13.174934  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:13.200490  293728 cri.go:89] found id: ""
	I1206 10:12:13.200517  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.200526  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:13.200532  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:13.200590  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:13.243683  293728 cri.go:89] found id: ""
	I1206 10:12:13.243709  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.243718  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:13.243726  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:13.243741  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:13.279303  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:13.279330  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:13.337861  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:13.337897  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:13.351559  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:13.351634  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:13.413990  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:13.406460   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.406956   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408410   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408802   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.410225   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:13.406460   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.406956   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408410   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408802   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.410225   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:13.414012  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:13.414028  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
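Note: the block above is one pass of minikube's apiserver wait loop. Each pass checks for a kube-apiserver process with pgrep, asks crictl for every expected control-plane container (apiserver, etcd, coredns, scheduler, kube-proxy, controller-manager, kindnet, dashboard), then gathers kubelet, dmesg, describe-nodes, containerd, and container-status output. The same pass repeats every few seconds until the 6m0s wait expires, which is why the following log lines are near-identical. A minimal sketch of the probe, using only commands that appear verbatim in this log:

	# does an apiserver process exist for this profile?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# is there an apiserver container, running or exited?
	sudo crictl ps -a --quiet --name=kube-apiserver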
	I1206 10:12:15.940438  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:15.952379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:15.952452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:15.997713  293728 cri.go:89] found id: ""
	I1206 10:12:15.997741  293728 logs.go:282] 0 containers: []
	W1206 10:12:15.997749  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:15.997755  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:15.997814  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:16.027447  293728 cri.go:89] found id: ""
	I1206 10:12:16.027477  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.027486  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:16.027494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:16.027552  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:16.056201  293728 cri.go:89] found id: ""
	I1206 10:12:16.056224  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.056232  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:16.056238  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:16.056296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:16.080619  293728 cri.go:89] found id: ""
	I1206 10:12:16.080641  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.080650  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:16.080657  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:16.080736  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:16.106294  293728 cri.go:89] found id: ""
	I1206 10:12:16.106316  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.106324  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:16.106330  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:16.106393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:16.131999  293728 cri.go:89] found id: ""
	I1206 10:12:16.132026  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.132036  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:16.132042  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:16.132103  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:16.156693  293728 cri.go:89] found id: ""
	I1206 10:12:16.156719  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.156734  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:16.156740  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:16.156819  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:16.182391  293728 cri.go:89] found id: ""
	I1206 10:12:16.182416  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.182426  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:16.182436  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:16.182467  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:16.262961  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:16.251126   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.252302   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.253220   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257326   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257858   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:16.251126   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.252302   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.253220   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257326   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257858   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:16.262991  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:16.263024  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:16.292146  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:16.292180  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:16.323803  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:16.323830  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:16.382496  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:16.382530  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:18.896413  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:18.906898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:18.907007  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:18.930731  293728 cri.go:89] found id: ""
	I1206 10:12:18.930763  293728 logs.go:282] 0 containers: []
	W1206 10:12:18.930773  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:18.930779  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:18.930844  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:18.955309  293728 cri.go:89] found id: ""
	I1206 10:12:18.955334  293728 logs.go:282] 0 containers: []
	W1206 10:12:18.955343  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:18.955349  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:18.955428  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:18.987453  293728 cri.go:89] found id: ""
	I1206 10:12:18.987480  293728 logs.go:282] 0 containers: []
	W1206 10:12:18.987489  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:18.987495  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:18.987559  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:19.016315  293728 cri.go:89] found id: ""
	I1206 10:12:19.016359  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.016369  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:19.016376  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:19.016457  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:19.046838  293728 cri.go:89] found id: ""
	I1206 10:12:19.046914  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.046939  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:19.046958  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:19.047088  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:19.076303  293728 cri.go:89] found id: ""
	I1206 10:12:19.076339  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.076348  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:19.076355  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:19.076424  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:19.100478  293728 cri.go:89] found id: ""
	I1206 10:12:19.100505  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.100514  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:19.100520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:19.100600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:19.125238  293728 cri.go:89] found id: ""
	I1206 10:12:19.125303  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.125317  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:19.125327  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:19.125338  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:19.181824  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:19.181858  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:19.195937  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:19.195963  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:19.288898  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:19.278661   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.279479   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.281324   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.282074   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.284115   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:19.278661   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.279479   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.281324   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.282074   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.284115   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:19.288922  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:19.288935  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:19.314454  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:19.314487  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:21.845581  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:21.856143  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:21.856207  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:21.880174  293728 cri.go:89] found id: ""
	I1206 10:12:21.880197  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.880206  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:21.880212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:21.880273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:21.906162  293728 cri.go:89] found id: ""
	I1206 10:12:21.906195  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.906204  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:21.906209  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:21.906277  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:21.929912  293728 cri.go:89] found id: ""
	I1206 10:12:21.929936  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.929945  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:21.929951  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:21.930017  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:21.955257  293728 cri.go:89] found id: ""
	I1206 10:12:21.955288  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.955297  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:21.955303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:21.955403  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:21.988656  293728 cri.go:89] found id: ""
	I1206 10:12:21.988682  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.988691  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:21.988698  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:21.988766  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:22.026205  293728 cri.go:89] found id: ""
	I1206 10:12:22.026232  293728 logs.go:282] 0 containers: []
	W1206 10:12:22.026241  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:22.026248  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:22.026321  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:22.056883  293728 cri.go:89] found id: ""
	I1206 10:12:22.056906  293728 logs.go:282] 0 containers: []
	W1206 10:12:22.056915  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:22.056923  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:22.056983  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:22.087245  293728 cri.go:89] found id: ""
	I1206 10:12:22.087269  293728 logs.go:282] 0 containers: []
	W1206 10:12:22.087277  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:22.087286  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:22.087296  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:22.148181  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:22.148213  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:22.161924  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:22.161952  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:22.238449  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:22.229386   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.230236   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.231860   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.232500   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.234010   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:22.229386   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.230236   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.231860   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.232500   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.234010   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:22.238523  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:22.238550  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:22.268691  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:22.268765  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:24.800715  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:24.811471  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:24.811557  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:24.836234  293728 cri.go:89] found id: ""
	I1206 10:12:24.836261  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.836270  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:24.836277  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:24.836335  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:24.861915  293728 cri.go:89] found id: ""
	I1206 10:12:24.861942  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.861951  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:24.861957  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:24.862015  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:24.886931  293728 cri.go:89] found id: ""
	I1206 10:12:24.886958  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.886968  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:24.886974  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:24.887058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:24.913606  293728 cri.go:89] found id: ""
	I1206 10:12:24.913633  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.913642  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:24.913649  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:24.913708  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:24.942656  293728 cri.go:89] found id: ""
	I1206 10:12:24.942690  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.942699  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:24.942706  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:24.942772  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:24.973528  293728 cri.go:89] found id: ""
	I1206 10:12:24.973563  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.973572  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:24.973579  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:24.973654  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:25.011969  293728 cri.go:89] found id: ""
	I1206 10:12:25.012007  293728 logs.go:282] 0 containers: []
	W1206 10:12:25.012017  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:25.012024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:25.012105  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:25.041306  293728 cri.go:89] found id: ""
	I1206 10:12:25.041340  293728 logs.go:282] 0 containers: []
	W1206 10:12:25.041349  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:25.041363  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:25.041377  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:25.068464  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:25.068503  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:25.098409  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:25.098436  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:25.156122  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:25.156158  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:25.170373  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:25.170405  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:25.248624  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:25.240035   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.240794   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.242472   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.243030   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.244596   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:25.240035   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.240794   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.242472   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.243030   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.244596   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:27.748906  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:27.759522  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:27.759591  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:27.785227  293728 cri.go:89] found id: ""
	I1206 10:12:27.785250  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.785258  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:27.785264  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:27.785319  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:27.810979  293728 cri.go:89] found id: ""
	I1206 10:12:27.811011  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.811021  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:27.811028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:27.811085  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:27.837232  293728 cri.go:89] found id: ""
	I1206 10:12:27.837298  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.837313  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:27.837320  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:27.837376  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:27.861601  293728 cri.go:89] found id: ""
	I1206 10:12:27.861625  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.861634  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:27.861641  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:27.861699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:27.886862  293728 cri.go:89] found id: ""
	I1206 10:12:27.886887  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.886897  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:27.886903  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:27.886960  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:27.911189  293728 cri.go:89] found id: ""
	I1206 10:12:27.911213  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.911222  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:27.911229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:27.911285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:27.935326  293728 cri.go:89] found id: ""
	I1206 10:12:27.935352  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.935361  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:27.935368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:27.935452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:27.959524  293728 cri.go:89] found id: ""
	I1206 10:12:27.959545  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.959555  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:27.959564  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:27.959575  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:28.028099  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:28.028143  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:28.048460  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:28.048488  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:28.118674  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:28.109022   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.109888   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.111697   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.112355   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.114062   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:28.109022   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.109888   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.111697   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.112355   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.114062   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:28.118697  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:28.118709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:28.144591  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:28.144630  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:30.673088  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:30.683869  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:30.683949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:30.708341  293728 cri.go:89] found id: ""
	I1206 10:12:30.708364  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.708372  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:30.708379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:30.708434  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:30.734236  293728 cri.go:89] found id: ""
	I1206 10:12:30.734261  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.734270  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:30.734276  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:30.734333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:30.760476  293728 cri.go:89] found id: ""
	I1206 10:12:30.760499  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.760508  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:30.760520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:30.760580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:30.785771  293728 cri.go:89] found id: ""
	I1206 10:12:30.785793  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.785802  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:30.785808  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:30.785871  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:30.814408  293728 cri.go:89] found id: ""
	I1206 10:12:30.814431  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.814439  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:30.814445  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:30.814504  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:30.840084  293728 cri.go:89] found id: ""
	I1206 10:12:30.840108  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.840117  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:30.840124  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:30.840183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:30.865698  293728 cri.go:89] found id: ""
	I1206 10:12:30.865723  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.865732  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:30.865745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:30.865807  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:30.895469  293728 cri.go:89] found id: ""
	I1206 10:12:30.895538  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.895553  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:30.895562  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:30.895573  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:30.952609  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:30.952644  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:30.966729  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:30.966758  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:31.059967  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:31.049168   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.050975   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.051825   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.053830   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.054324   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:31.049168   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.050975   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.051825   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.053830   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.054324   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:31.059992  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:31.060006  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:31.087739  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:31.087785  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:33.618907  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:33.633558  293728 out.go:203] 
	W1206 10:12:33.636407  293728 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1206 10:12:33.636439  293728 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1206 10:12:33.636448  293728 out.go:285] * Related issues:
	W1206 10:12:33.636468  293728 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1206 10:12:33.636488  293728 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1206 10:12:33.640150  293728 out.go:203] 
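Note: K8S_APISERVER_MISSING means the 6m0s wait expired without an apiserver process ever appearing. The suggestion points at apiserver flags and SELinux, but the journals dumped below show the actual blocker: kubelet never stays up, so the kube-apiserver static pod is never created. Assuming shell access to the node, the two checks minikube kept running can be repeated by hand:

	# both come back empty for the whole run: no process, no container
	sudo pgrep -af kube-apiserver
	sudo crictl ps -a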
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.180738951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.180841500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181051963Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181150721Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181229926Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181300302Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181371777Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181434227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181504595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181620526Z" level=info msg="Connect containerd service"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.182068703Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.183078485Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193499317Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193692279Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193840088Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193788608Z" level=info msg="Start recovering state"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235102688Z" level=info msg="Start event monitor"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235301393Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235445231Z" level=info msg="Start streaming server"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235540452Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235788569Z" level=info msg="runtime interface starting up..."
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235878794Z" level=info msg="starting plugins..."
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235966762Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:06:31 newest-cni-387337 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.238179492Z" level=info msg="containerd successfully booted in 0.085161s"
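Note: the only error in the containerd journal is the CNI load failure at startup, reporting no config in /etc/cni/net.d when the daemon booted. That is expected this early, since the CNI config is normally installed after the control plane comes up, so it is not the failure cause here. A quick check, assuming a shell on the node:

	# empty (or absent) until a CNI is deployed
	ls -la /etc/cni/net.d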
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:36.928719   13448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:36.929116   13448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:36.930896   13448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:36.931499   13448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:36.933083   13448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
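Note: every kubectl invocation in this run dies the same way, since nothing is listening on port 8443, so describe nodes can never succeed. The refusal can be confirmed directly on the node; this probe is an assumption for illustration, not part of the test run:

	# -k skips certificate verification; expect "connection refused" here
	curl -k https://localhost:8443/healthz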
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:12:36 up  1:55,  0 user,  load average: 0.75, 0.68, 1.28
	Linux newest-cni-387337 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:12:33 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:34 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 06 10:12:34 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:34 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:34 newest-cni-387337 kubelet[13324]: E1206 10:12:34.301839   13324 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:34 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:34 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:34 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 06 10:12:34 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:34 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:35 newest-cni-387337 kubelet[13329]: E1206 10:12:35.056169   13329 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:35 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:35 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:35 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 06 10:12:35 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:35 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:35 newest-cni-387337 kubelet[13349]: E1206 10:12:35.779331   13349 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:35 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:35 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:36 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 06 10:12:36 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:36 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:36 newest-cni-387337 kubelet[13355]: E1206 10:12:36.544245   13355 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:36 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:36 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
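The root cause is in the kubelet journal above: config validation rejects the host because it is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts the unit in a tight loop (restart counter 483 -> 486 within about three seconds). The kernel line shows an Ubuntu 20.04 AWS host (5.15.0-1084-aws), and 20.04 boots with cgroup v1 by default. A quick way to check a host's cgroup mode (a minimal sketch; plain coreutils, nothing minikube-specific):

	# Prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1.
	stat -fc %T /sys/fs/cgroup/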
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (351.300929ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-387337" apiserver is not running, skipping kubectl commands (state="Stopped")
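With kubelet crash-looping, the static pods it normally launches (kube-apiserver included) never start, which is why status reports the apiserver as Stopped. One way to double-check that no apiserver container exists on the node, sketched here assuming the docker driver (the node is a container named after the profile) and the crictl binary inside it:

	# Lists apiserver containers in any state; expect empty output on this node.
	docker exec newest-cni-387337 crictl ps -a --name kube-apiserver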
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (373.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
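The wait loop below polls the API for dashboard pods and logs one WARNING per refused attempt; because the apiserver at 192.168.76.2:8443 never comes back, every poll fails until the 9m0s timeout expires. The equivalent manual check looks like this (a sketch; the kubectl context name for the no-preload profile is assumed):

	# Hypothetical context name; substitute the actual no-preload profile.
	kubectl --context no-preload-XXXXXX -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard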
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[identical WARNING repeated 41 more times]
E1206 10:09:42.815857    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[identical WARNING repeated 72 more times]
E1206 10:10:55.754612    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[identical WARNING repeated 13 more times]
E1206 10:11:09.863140    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[identical WARNING repeated 25 more times]
E1206 10:11:36.062087    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeated 56 more times while polling]
E1206 10:12:32.928815    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeated 24 more times while polling]
E1206 10:12:57.331020    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1206 10:15:55.754528    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1206 10:16:05.880653    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
I1206 10:16:08.254660    4292 config.go:182] Loaded profile config "kindnet-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1206 10:16:09.863221    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1206 10:16:19.147882    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1206 10:16:36.061951    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1206 10:17:18.823963    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the WARNING line above repeated verbatim 38 more times]
E1206 10:17:57.331269    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the WARNING line above repeated verbatim 2 more times]
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359: exit status 2 (499.53711ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-257359" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
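Every WARNING above comes from the same poll: the helper lists pods in the kubernetes-dashboard namespace by label every few seconds and logs the error while the apiserver is unreachable, until the 9m0s wait expires. A minimal Go/client-go sketch of that kind of loop (not minikube's actual helper; the kubeconfig path, namespace, label, and timeout are taken from the log above, everything else is illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as reported in the minikube start logs.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22049-2448/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // 9m0s matches the wait that ended with "context deadline exceeded".
        ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
        defer cancel()
        for {
            pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
            if err != nil {
                // With the apiserver stopped this prints the same
                // "connect: connection refused" seen in the warnings above.
                fmt.Println("WARNING: pod list returned:", err)
            } else if len(pods.Items) > 0 {
                fmt.Println("found pod:", pods.Items[0].Name)
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println(ctx.Err()) // "context deadline exceeded"
                return
            case <-time.After(5 * time.Second):
            }
        }
    }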
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-257359
helpers_test.go:243: (dbg) docker inspect no-preload-257359:

-- stdout --
	[
	    {
	        "Id": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	        "Created": "2025-12-06T09:52:27.333376101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:02:50.853067046Z",
	            "FinishedAt": "2025-12-06T10:02:49.497503356Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hostname",
	        "HostsPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hosts",
	        "LogPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26-json.log",
	        "Name": "/no-preload-257359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-257359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-257359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	                "LowerDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-257359",
	                "Source": "/var/lib/docker/volumes/no-preload-257359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-257359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-257359",
	                "name.minikube.sigs.k8s.io": "no-preload-257359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "263a8cb62ad65d73ef315ff544437f3a15543e9da8e511558b3504b20118eae7",
	            "SandboxKey": "/var/run/docker/netns/263a8cb62ad6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-257359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:cd:c5:1d:17:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b05bfbfa55363c82b2c20e75689dc6d905b9177d9ed6efb1bc4c663e65903cf4",
	                    "EndpointID": "fe68f03ea36cc45569898aaadfae8dde5a2342dd57895d5970718f4ce7302e58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-257359",
	                        "76494ba86a40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
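Nearly all of the inspect output above is routine; the two load-bearing facts for this failure are State.Running=true (the container itself survived the restart) and the profile network address 192.168.76.2, exactly the address the refused pod-list calls were dialing. A small Go sketch of extracting just those fields from docker inspect (a hypothetical helper, not part of the suite; assumes docker on PATH, and the structs cover only the fields used here):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the fields this post-mortem actually reads.
    type container struct {
        State struct {
            Status  string `json:"Status"`
            Running bool   `json:"Running"`
        } `json:"State"`
        NetworkSettings struct {
            Networks map[string]struct {
                IPAddress string `json:"IPAddress"`
            } `json:"Networks"`
        } `json:"NetworkSettings"`
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "no-preload-257359").Output()
        if err != nil {
            panic(err)
        }
        var cs []container // docker inspect always emits a JSON array
        if err := json.Unmarshal(out, &cs); err != nil {
            panic(err)
        }
        for _, c := range cs {
            fmt.Printf("state=%s running=%v\n", c.State.Status, c.State.Running)
            for name, n := range c.NetworkSettings.Networks {
                fmt.Printf("network %s ip=%s\n", name, n.IPAddress) // 192.168.76.2 here
            }
        }
    }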
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359: exit status 2 (483.112579ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
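For context on the two status probes: --format={{.Host}} and --format={{.APIServer}} are Go text/template expressions that minikube renders over its status struct, which is why one probe printed Running (the container is up) while the other printed Stopped (the apiserver inside it is not). A tiny illustration of the same mechanism (the field names mirror only the two templates used above; minikube's real struct has more fields):

    package main

    import (
        "os"
        "text/template"
    )

    type status struct {
        Host      string
        APIServer string
    }

    func main() {
        s := status{Host: "Running", APIServer: "Stopped"} // the values in this post-mortem
        t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
        if err := t.Execute(os.Stdout, s); err != nil { // prints "Stopped"
            panic(err)
        }
    }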
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-257359 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-257359 logs -n 25: (1.055141088s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-793086 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo systemctl status docker --all --full --no-pager                                                                       │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-793086 sudo systemctl cat docker --no-pager                                                                                       │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo cat /etc/docker/daemon.json                                                                                           │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-793086 sudo docker system info                                                                                                    │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-793086 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-793086 sudo systemctl cat cri-docker --no-pager                                                                                   │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-793086 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo cri-dockerd --version                                                                                                 │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo systemctl status containerd --all --full --no-pager                                                                   │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo systemctl cat containerd --no-pager                                                                                   │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo cat /lib/systemd/system/containerd.service                                                                            │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo cat /etc/containerd/config.toml                                                                                       │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo containerd config dump                                                                                                │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo systemctl status crio --all --full --no-pager                                                                         │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-793086 sudo systemctl cat crio --no-pager                                                                                         │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-793086 sudo crio config                                                                                                           │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ delete  │ -p kindnet-793086                                                                                                                            │ kindnet-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:16 UTC │
	│ start   │ -p calico-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd │ calico-793086  │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:17 UTC │
	│ ssh     │ -p calico-793086 pgrep -a kubelet                                                                                                            │ calico-793086  │ jenkins │ v1.37.0 │ 06 Dec 25 10:17 UTC │ 06 Dec 25 10:17 UTC │
	│ ssh     │ -p calico-793086 sudo cat /etc/nsswitch.conf                                                                                                 │ calico-793086  │ jenkins │ v1.37.0 │ 06 Dec 25 10:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:16:39
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:16:39.134703  326059 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:16:39.134887  326059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:16:39.134918  326059 out.go:374] Setting ErrFile to fd 2...
	I1206 10:16:39.134943  326059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:16:39.135200  326059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:16:39.135696  326059 out.go:368] Setting JSON to false
	I1206 10:16:39.136660  326059 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7151,"bootTime":1765009049,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:16:39.136763  326059 start.go:143] virtualization:  
	I1206 10:16:39.140585  326059 out.go:179] * [calico-793086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:16:39.145118  326059 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:16:39.145218  326059 notify.go:221] Checking for updates...
	I1206 10:16:39.151941  326059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:16:39.155078  326059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:16:39.158141  326059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:16:39.161342  326059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:16:39.164458  326059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:16:39.168198  326059 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:16:39.168301  326059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:16:39.206450  326059 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:16:39.206584  326059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:16:39.262354  326059 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:16:39.251736666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:16:39.262466  326059 docker.go:319] overlay module found
	I1206 10:16:39.265833  326059 out.go:179] * Using the docker driver based on user configuration
	I1206 10:16:39.268850  326059 start.go:309] selected driver: docker
	I1206 10:16:39.268871  326059 start.go:927] validating driver "docker" against <nil>
	I1206 10:16:39.268885  326059 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:16:39.269621  326059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:16:39.325372  326059 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:16:39.316342369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:16:39.325520  326059 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:16:39.325780  326059 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:16:39.328774  326059 out.go:179] * Using Docker driver with root privileges
	I1206 10:16:39.331763  326059 cni.go:84] Creating CNI manager for "calico"
	I1206 10:16:39.331788  326059 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1206 10:16:39.331878  326059 start.go:353] cluster config:
	{Name:calico-793086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:16:39.335108  326059 out.go:179] * Starting "calico-793086" primary control-plane node in "calico-793086" cluster
	I1206 10:16:39.337793  326059 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:16:39.340763  326059 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:16:39.343646  326059 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 10:16:39.343691  326059 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1206 10:16:39.343702  326059 cache.go:65] Caching tarball of preloaded images
	I1206 10:16:39.343724  326059 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:16:39.343786  326059 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:16:39.343797  326059 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 10:16:39.343910  326059 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/config.json ...
	I1206 10:16:39.343931  326059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/config.json: {Name:mke229bb3c4e030a254f11c1f7d2083aa97eddd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:16:39.362733  326059 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:16:39.362757  326059 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:16:39.362772  326059 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:16:39.362802  326059 start.go:360] acquireMachinesLock for calico-793086: {Name:mk64b597733a1df368fb70457b81f09fcf528524 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:16:39.362918  326059 start.go:364] duration metric: took 96.1µs to acquireMachinesLock for "calico-793086"
	I1206 10:16:39.362950  326059 start.go:93] Provisioning new machine with config: &{Name:calico-793086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:16:39.363028  326059 start.go:125] createHost starting for "" (driver="docker")
	I1206 10:16:39.366394  326059 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 10:16:39.366633  326059 start.go:159] libmachine.API.Create for "calico-793086" (driver="docker")
	I1206 10:16:39.366673  326059 client.go:173] LocalClient.Create starting
	I1206 10:16:39.366744  326059 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 10:16:39.366782  326059 main.go:143] libmachine: Decoding PEM data...
	I1206 10:16:39.366805  326059 main.go:143] libmachine: Parsing certificate...
	I1206 10:16:39.366872  326059 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 10:16:39.366896  326059 main.go:143] libmachine: Decoding PEM data...
	I1206 10:16:39.366912  326059 main.go:143] libmachine: Parsing certificate...
	I1206 10:16:39.367290  326059 cli_runner.go:164] Run: docker network inspect calico-793086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 10:16:39.383952  326059 cli_runner.go:211] docker network inspect calico-793086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 10:16:39.384041  326059 network_create.go:284] running [docker network inspect calico-793086] to gather additional debugging logs...
	I1206 10:16:39.384063  326059 cli_runner.go:164] Run: docker network inspect calico-793086
	W1206 10:16:39.399924  326059 cli_runner.go:211] docker network inspect calico-793086 returned with exit code 1
	I1206 10:16:39.399961  326059 network_create.go:287] error running [docker network inspect calico-793086]: docker network inspect calico-793086: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-793086 not found
	I1206 10:16:39.399977  326059 network_create.go:289] output of [docker network inspect calico-793086]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-793086 not found
	
	** /stderr **
	I1206 10:16:39.400081  326059 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:16:39.416695  326059 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
	I1206 10:16:39.417106  326059 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6479799cc46a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:b3:f8:bd:10:a1} reservation:<nil>}
	I1206 10:16:39.417540  326059 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-045bb1cdddf9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:c6:f0:a4:f5:8d} reservation:<nil>}
	I1206 10:16:39.417868  326059 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b05bfbfa5536 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:01:4f:ea:ac:91} reservation:<nil>}
	I1206 10:16:39.418351  326059 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e96e0}
	I1206 10:16:39.418375  326059 network_create.go:124] attempt to create docker network calico-793086 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 10:16:39.418431  326059 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-793086 calico-793086
	I1206 10:16:39.489998  326059 network_create.go:108] docker network calico-793086 192.168.85.0/24 created
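
The subnet probe above (skip 192.168.49/58/67/76, settle on 192.168.85.0/24) can be illustrated with a small Go sketch. This is a hypothetical reconstruction of the step-by-9 scan over 192.168.x.0/24 candidates inferred from this log, not minikube's actual network.go implementation, which also inspects host interfaces and reservations:

	package main

	import (
		"fmt"
		"net"
	)

	// freeSubnet returns the first candidate /24 that is not already taken,
	// stepping the third octet by 9 (49, 58, 67, 76, 85, ...) as observed
	// in the log above. Illustrative sketch only.
	func freeSubnet(taken map[string]bool) (*net.IPNet, error) {
		for octet := 49; octet <= 254; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[cidr] {
				continue
			}
			_, subnet, err := net.ParseCIDR(cidr)
			return subnet, err
		}
		return nil, fmt.Errorf("no free 192.168.x.0/24 subnet")
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
		}
		subnet, err := freeSubnet(taken)
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet", subnet) // 192.168.85.0/24, as in the log
	}
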
	I1206 10:16:39.490027  326059 kic.go:121] calculated static IP "192.168.85.2" for the "calico-793086" container
	I1206 10:16:39.490124  326059 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 10:16:39.507537  326059 cli_runner.go:164] Run: docker volume create calico-793086 --label name.minikube.sigs.k8s.io=calico-793086 --label created_by.minikube.sigs.k8s.io=true
	I1206 10:16:39.532648  326059 oci.go:103] Successfully created a docker volume calico-793086
	I1206 10:16:39.532730  326059 cli_runner.go:164] Run: docker run --rm --name calico-793086-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-793086 --entrypoint /usr/bin/test -v calico-793086:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 10:16:40.096279  326059 oci.go:107] Successfully prepared a docker volume calico-793086
	I1206 10:16:40.096370  326059 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 10:16:40.096383  326059 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 10:16:40.096461  326059 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-793086:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 10:16:44.186335  326059 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-793086:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.089839631s)
	I1206 10:16:44.186372  326059 kic.go:203] duration metric: took 4.08998675s to extract preloaded images to volume ...
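
The preload step above shells out to docker, running tar inside the kic base image with the tarball mounted read-only and the cluster volume as destination. A minimal Go sketch of that invocation (paths taken from the log, image digest omitted for brevity; minikube drives this through its cli_runner rather than directly like this):

	package main

	import (
		"os"
		"os/exec"
	)

	// Extract the lz4-compressed preload tarball into the named docker
	// volume, mirroring the `docker run --rm --entrypoint /usr/bin/tar`
	// command logged above. Illustrative sketch only.
	func main() {
		tarball := "/home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032" // digest elided
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", "calico-793086:/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
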
	W1206 10:16:44.186534  326059 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 10:16:44.186651  326059 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 10:16:44.241274  326059 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-793086 --name calico-793086 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-793086 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-793086 --network calico-793086 --ip 192.168.85.2 --volume calico-793086:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 10:16:44.556037  326059 cli_runner.go:164] Run: docker container inspect calico-793086 --format={{.State.Running}}
	I1206 10:16:44.577462  326059 cli_runner.go:164] Run: docker container inspect calico-793086 --format={{.State.Status}}
	I1206 10:16:44.608884  326059 cli_runner.go:164] Run: docker exec calico-793086 stat /var/lib/dpkg/alternatives/iptables
	I1206 10:16:44.653923  326059 oci.go:144] the created container "calico-793086" has a running status.
	I1206 10:16:44.653950  326059 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/calico-793086/id_rsa...
	I1206 10:16:44.829710  326059 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/calico-793086/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 10:16:44.855579  326059 cli_runner.go:164] Run: docker container inspect calico-793086 --format={{.State.Status}}
	I1206 10:16:44.878112  326059 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 10:16:44.878134  326059 kic_runner.go:114] Args: [docker exec --privileged calico-793086 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 10:16:44.937532  326059 cli_runner.go:164] Run: docker container inspect calico-793086 --format={{.State.Status}}
	I1206 10:16:44.976348  326059 machine.go:94] provisionDockerMachine start ...
	I1206 10:16:44.976450  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:16:45.018444  326059 main.go:143] libmachine: Using SSH client type: native
	I1206 10:16:45.019755  326059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 10:16:45.019868  326059 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:16:45.020997  326059 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37694->127.0.0.1:33118: read: connection reset by peer
	I1206 10:16:48.178946  326059 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-793086
	
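
The "connection reset by peer" at 10:16:45.020997 followed by a clean hostname result at 10:16:48 is the usual dial-and-retry window while sshd comes up inside the freshly started container. A minimal sketch of such a retry loop (hypothetical; not minikube's libmachine code, and the port 33118 is the forwarded SSH port from this log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Keep dialing the forwarded SSH port until the TCP handshake succeeds
	// or the deadline passes; early attempts typically see resets while the
	// container's sshd is still starting.
	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("ssh port never came up: %w", err)
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		conn, err := dialWithRetry("127.0.0.1:33118", 30*time.Second)
		if err != nil {
			panic(err)
		}
		conn.Close()
		fmt.Println("ssh endpoint reachable")
	}
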
	I1206 10:16:48.178971  326059 ubuntu.go:182] provisioning hostname "calico-793086"
	I1206 10:16:48.179036  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:16:48.199314  326059 main.go:143] libmachine: Using SSH client type: native
	I1206 10:16:48.199686  326059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 10:16:48.199702  326059 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-793086 && echo "calico-793086" | sudo tee /etc/hostname
	I1206 10:16:48.360725  326059 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-793086
	
	I1206 10:16:48.360801  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:16:48.379295  326059 main.go:143] libmachine: Using SSH client type: native
	I1206 10:16:48.379633  326059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1206 10:16:48.379678  326059 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-793086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-793086/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-793086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:16:48.536228  326059 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:16:48.536254  326059 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:16:48.536290  326059 ubuntu.go:190] setting up certificates
	I1206 10:16:48.536299  326059 provision.go:84] configureAuth start
	I1206 10:16:48.536365  326059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-793086
	I1206 10:16:48.554507  326059 provision.go:143] copyHostCerts
	I1206 10:16:48.554588  326059 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:16:48.554602  326059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:16:48.554681  326059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:16:48.554783  326059 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:16:48.554791  326059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:16:48.554822  326059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:16:48.554906  326059 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:16:48.554918  326059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:16:48.554948  326059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:16:48.555005  326059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.calico-793086 san=[127.0.0.1 192.168.85.2 calico-793086 localhost minikube]
	I1206 10:16:48.926652  326059 provision.go:177] copyRemoteCerts
	I1206 10:16:48.926721  326059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:16:48.926760  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:16:48.945843  326059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/calico-793086/id_rsa Username:docker}
	I1206 10:16:49.055550  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:16:49.073347  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 10:16:49.091752  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:16:49.109480  326059 provision.go:87] duration metric: took 573.158021ms to configureAuth
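
configureAuth above generates a server certificate whose SANs are [127.0.0.1 192.168.85.2 calico-793086 localhost minikube]. A compact Go sketch of issuing a SAN certificate like that (self-signed here for brevity; the real flow signs with the minikube CA key and writes the PEM files under .minikube/machines):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SANs and org match the log; 26280h matches CertExpiration in the config.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.calico-793086"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"calico-793086", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}
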
	I1206 10:16:49.109507  326059 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:16:49.109699  326059 config.go:182] Loaded profile config "calico-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:16:49.109707  326059 machine.go:97] duration metric: took 4.133341031s to provisionDockerMachine
	I1206 10:16:49.109713  326059 client.go:176] duration metric: took 9.743029877s to LocalClient.Create
	I1206 10:16:49.109732  326059 start.go:167] duration metric: took 9.743100097s to libmachine.API.Create "calico-793086"
	I1206 10:16:49.109739  326059 start.go:293] postStartSetup for "calico-793086" (driver="docker")
	I1206 10:16:49.109748  326059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:16:49.109805  326059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:16:49.109850  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:16:49.126921  326059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/calico-793086/id_rsa Username:docker}
	I1206 10:16:49.232926  326059 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:16:49.236925  326059 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:16:49.236998  326059 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:16:49.237024  326059 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:16:49.237104  326059 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:16:49.237219  326059 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:16:49.237367  326059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:16:49.246395  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:16:49.266318  326059 start.go:296] duration metric: took 156.564887ms for postStartSetup
	I1206 10:16:49.267104  326059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-793086
	I1206 10:16:49.289893  326059 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/config.json ...
	I1206 10:16:49.290177  326059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:16:49.290217  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:16:49.309989  326059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/calico-793086/id_rsa Username:docker}
	I1206 10:16:49.412585  326059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:16:49.417582  326059 start.go:128] duration metric: took 10.05453594s to createHost
	I1206 10:16:49.417609  326059 start.go:83] releasing machines lock for "calico-793086", held for 10.054676027s
	I1206 10:16:49.417682  326059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-793086
	I1206 10:16:49.434840  326059 ssh_runner.go:195] Run: cat /version.json
	I1206 10:16:49.434927  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:16:49.434866  326059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:16:49.435046  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:16:49.453339  326059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/calico-793086/id_rsa Username:docker}
	I1206 10:16:49.464979  326059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/calico-793086/id_rsa Username:docker}
	I1206 10:16:49.563255  326059 ssh_runner.go:195] Run: systemctl --version
	I1206 10:16:49.656325  326059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:16:49.660787  326059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:16:49.660887  326059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:16:49.688274  326059 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 10:16:49.688295  326059 start.go:496] detecting cgroup driver to use...
	I1206 10:16:49.688329  326059 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:16:49.688378  326059 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:16:49.703186  326059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:16:49.716614  326059 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:16:49.716678  326059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:16:49.734588  326059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:16:49.753438  326059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:16:49.873758  326059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:16:50.018375  326059 docker.go:234] disabling docker service ...
	I1206 10:16:50.018474  326059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:16:50.050190  326059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:16:50.065011  326059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:16:50.182083  326059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:16:50.319574  326059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:16:50.334826  326059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:16:50.353482  326059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:16:50.364394  326059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:16:50.374232  326059 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:16:50.374304  326059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:16:50.384583  326059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:16:50.395097  326059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:16:50.404472  326059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:16:50.413551  326059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:16:50.421975  326059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:16:50.430689  326059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:16:50.439297  326059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:16:50.448045  326059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:16:50.456089  326059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:16:50.463616  326059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:16:50.572497  326059 ssh_runner.go:195] Run: sudo systemctl restart containerd
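
The run of sed commands above (10:16:50.353 through 10:16:50.439) rewrites /etc/containerd/config.toml for the cgroupfs driver, the pause:3.10.1 sandbox image, and the v2 runc shim before restarting containerd. The same edits can be sketched as Go regexp substitutions over the file contents (illustrative only; the log performs them in place over SSH):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Apply the config.toml rewrites from the log as regexp substitutions.
	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  SystemdCgroup = true
	`
		rules := []struct{ re, repl string }{
			{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
			{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // cgroupfs driver
		}
		for _, r := range rules {
			conf = regexp.MustCompile(r.re).ReplaceAllString(conf, r.repl)
		}
		fmt.Print(conf)
	}
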
	I1206 10:16:50.709635  326059 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:16:50.709758  326059 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:16:50.713601  326059 start.go:564] Will wait 60s for crictl version
	I1206 10:16:50.713728  326059 ssh_runner.go:195] Run: which crictl
	I1206 10:16:50.719547  326059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:16:50.757293  326059 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:16:50.757375  326059 ssh_runner.go:195] Run: containerd --version
	I1206 10:16:50.792768  326059 ssh_runner.go:195] Run: containerd --version
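
The two 60-second waits above (for the containerd socket and for a working crictl) are simple poll loops. A sketch of the socket wait under that assumption (illustrative; not minikube's start.go):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// Poll for the containerd socket until it appears or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("containerd socket ready")
	}
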
	I1206 10:16:50.818268  326059 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1206 10:16:50.821309  326059 cli_runner.go:164] Run: docker network inspect calico-793086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:16:50.837761  326059 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 10:16:50.841382  326059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:16:50.851141  326059 kubeadm.go:884] updating cluster {Name:calico-793086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:16:50.851266  326059 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 10:16:50.851339  326059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:16:50.877227  326059 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:16:50.877253  326059 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:16:50.877310  326059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:16:50.901351  326059 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:16:50.901374  326059 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:16:50.901382  326059 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1206 10:16:50.901466  326059 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-793086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1206 10:16:50.901532  326059 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:16:50.927191  326059 cni.go:84] Creating CNI manager for "calico"
	I1206 10:16:50.927232  326059 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:16:50.927255  326059 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-793086 NodeName:calico-793086 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:16:50.927370  326059 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-793086"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
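
The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml (see the scp at 10:16:50.968) and later fed to kubeadm init. A hedged Go sketch of that write-and-invoke step (the ignore-preflight flag list is abridged from the full invocation later in this log):

	package main

	import (
		"os"
		"os/exec"
	)

	// Write the generated kubeadm config and run `kubeadm init` against it,
	// mirroring the invocation later in this log. Illustrative only.
	func main() {
		cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ... full config as printed above ...\n")
		if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml", cfg, 0o644); err != nil {
			panic(err)
		}
		cmd := exec.Command("kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification") // abridged
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
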
	I1206 10:16:50.927473  326059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 10:16:50.935036  326059 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:16:50.935110  326059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:16:50.942892  326059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1206 10:16:50.955955  326059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 10:16:50.968849  326059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1206 10:16:50.982303  326059 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:16:50.986117  326059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:16:50.995908  326059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:16:51.109069  326059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:16:51.132007  326059 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086 for IP: 192.168.85.2
	I1206 10:16:51.132028  326059 certs.go:195] generating shared ca certs ...
	I1206 10:16:51.132053  326059 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:16:51.132218  326059 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:16:51.132261  326059 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:16:51.132268  326059 certs.go:257] generating profile certs ...
	I1206 10:16:51.132328  326059 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.key
	I1206 10:16:51.132341  326059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt with IP's: []
	I1206 10:16:51.659008  326059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt ...
	I1206 10:16:51.659042  326059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: {Name:mkc5dd9e98b8a6bceb754e7b5641625649b19b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:16:51.659284  326059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.key ...
	I1206 10:16:51.659302  326059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.key: {Name:mk133cc192af4bd911a10071e91693e31842cfc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:16:51.659452  326059 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.key.26901c12
	I1206 10:16:51.659474  326059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.crt.26901c12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 10:16:51.909467  326059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.crt.26901c12 ...
	I1206 10:16:51.909499  326059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.crt.26901c12: {Name:mkc64b6ce8a58d31bf6a1b5b42b3ec4f5828256f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:16:51.909679  326059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.key.26901c12 ...
	I1206 10:16:51.909694  326059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.key.26901c12: {Name:mk7be53990d5e76f0d46fb71d3b7852f795919b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:16:51.909776  326059 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.crt.26901c12 -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.crt
	I1206 10:16:51.909860  326059 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.key.26901c12 -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.key
	I1206 10:16:51.909921  326059 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/proxy-client.key
	I1206 10:16:51.909934  326059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/proxy-client.crt with IP's: []
	I1206 10:16:52.058938  326059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/proxy-client.crt ...
	I1206 10:16:52.058971  326059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/proxy-client.crt: {Name:mk37452beae525b233a8911cf8453f07532af0fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:16:52.059145  326059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/proxy-client.key ...
	I1206 10:16:52.059159  326059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/proxy-client.key: {Name:mka96cf644f791bb77cbfa5e2c27677794829747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:16:52.059401  326059 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:16:52.059450  326059 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:16:52.059459  326059 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:16:52.059487  326059 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:16:52.059512  326059 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:16:52.059537  326059 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:16:52.059582  326059 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:16:52.060240  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:16:52.079892  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:16:52.100424  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:16:52.120322  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:16:52.139154  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 10:16:52.158333  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:16:52.176940  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:16:52.195024  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:16:52.213105  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:16:52.233247  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:16:52.254210  326059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:16:52.274475  326059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:16:52.289366  326059 ssh_runner.go:195] Run: openssl version
	I1206 10:16:52.296131  326059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:16:52.304435  326059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:16:52.312459  326059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:16:52.316455  326059 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:16:52.316519  326059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:16:52.357788  326059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:16:52.365349  326059 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
	I1206 10:16:52.374535  326059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:16:52.382134  326059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:16:52.389794  326059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:16:52.393814  326059 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:16:52.393883  326059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:16:52.435230  326059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:16:52.442766  326059 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
	I1206 10:16:52.450245  326059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:16:52.457917  326059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:16:52.465798  326059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:16:52.469775  326059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:16:52.469892  326059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:16:52.513603  326059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:16:52.521509  326059 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 10:16:52.529134  326059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:16:52.533228  326059 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 10:16:52.533301  326059 kubeadm.go:401] StartCluster: {Name:calico-793086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:16:52.533377  326059 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:16:52.533443  326059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:16:52.562360  326059 cri.go:89] found id: ""
	I1206 10:16:52.562430  326059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:16:52.570404  326059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:16:52.578509  326059 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:16:52.578605  326059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:16:52.586498  326059 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:16:52.586560  326059 kubeadm.go:158] found existing configuration files:
	
	I1206 10:16:52.586628  326059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:16:52.594883  326059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:16:52.594965  326059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:16:52.602472  326059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:16:52.610678  326059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:16:52.610770  326059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:16:52.618703  326059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:16:52.626533  326059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:16:52.626598  326059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:16:52.633882  326059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:16:52.641692  326059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:16:52.641761  326059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
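
The grep/rm sequence above enforces a simple invariant before kubeadm init below: each kubeconfig under /etc/kubernetes must already reference https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm regenerates it. Sketched in Go (illustrative only; the log runs the equivalent grep and rm over SSH):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// Keep each kubeconfig only if it points at the expected control-plane
	// endpoint; remove missing or stale files so kubeadm recreates them.
	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + f
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(path)
				fmt.Println("removed", path)
			}
		}
	}
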
	I1206 10:16:52.649086  326059 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:16:52.690504  326059 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 10:16:52.690568  326059 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:16:52.723955  326059 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:16:52.724053  326059 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:16:52.724100  326059 kubeadm.go:319] OS: Linux
	I1206 10:16:52.724168  326059 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:16:52.724230  326059 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:16:52.724311  326059 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:16:52.724367  326059 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:16:52.724439  326059 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:16:52.724501  326059 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:16:52.724574  326059 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:16:52.724634  326059 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:16:52.724706  326059 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:16:52.809433  326059 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:16:52.809557  326059 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:16:52.809652  326059 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:16:52.815917  326059 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:16:52.822742  326059 out.go:252]   - Generating certificates and keys ...
	I1206 10:16:52.822930  326059 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:16:52.823035  326059 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:16:53.390312  326059 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 10:16:55.473354  326059 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 10:16:55.539121  326059 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 10:16:55.851891  326059 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 10:16:56.429794  326059 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 10:16:56.430179  326059 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-793086 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 10:16:56.669986  326059 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 10:16:56.670424  326059 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-793086 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 10:16:57.036672  326059 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 10:16:57.288613  326059 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 10:16:57.683880  326059 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 10:16:57.687741  326059 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:16:58.133232  326059 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:16:58.435301  326059 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:16:59.172367  326059 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:16:59.415659  326059 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:17:00.063760  326059 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:17:00.063863  326059 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:17:00.101034  326059 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:17:00.104863  326059 out.go:252]   - Booting up control plane ...
	I1206 10:17:00.104983  326059 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:17:00.105061  326059 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:17:00.105130  326059 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:17:00.125900  326059 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:17:00.126564  326059 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:17:00.138507  326059 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:17:00.139527  326059 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:17:00.140073  326059 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:17:00.411370  326059 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:17:00.411537  326059 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:17:00.912990  326059 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.944537ms
	I1206 10:17:00.920550  326059 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 10:17:00.920911  326059 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1206 10:17:00.921010  326059 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 10:17:00.921097  326059 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 10:17:05.397263  326059 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.475829081s
	I1206 10:17:06.755538  326059 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.83454793s
	I1206 10:17:08.424327  326059 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.50314624s
	I1206 10:17:08.459953  326059 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 10:17:08.476408  326059 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 10:17:08.492299  326059 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 10:17:08.492513  326059 kubeadm.go:319] [mark-control-plane] Marking the node calico-793086 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 10:17:08.505445  326059 kubeadm.go:319] [bootstrap-token] Using token: 11h78t.cowjljjl30xhmp35
	I1206 10:17:08.508416  326059 out.go:252]   - Configuring RBAC rules ...
	I1206 10:17:08.508544  326059 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 10:17:08.513645  326059 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 10:17:08.522854  326059 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 10:17:08.527266  326059 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1206 10:17:08.533930  326059 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 10:17:08.538789  326059 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 10:17:08.832251  326059 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 10:17:09.272193  326059 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 10:17:09.832091  326059 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 10:17:09.833426  326059 kubeadm.go:319] 
	I1206 10:17:09.833520  326059 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 10:17:09.833533  326059 kubeadm.go:319] 
	I1206 10:17:09.833618  326059 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 10:17:09.833623  326059 kubeadm.go:319] 
	I1206 10:17:09.833648  326059 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 10:17:09.833707  326059 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 10:17:09.833757  326059 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 10:17:09.833762  326059 kubeadm.go:319] 
	I1206 10:17:09.833815  326059 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 10:17:09.833819  326059 kubeadm.go:319] 
	I1206 10:17:09.833866  326059 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 10:17:09.833869  326059 kubeadm.go:319] 
	I1206 10:17:09.833921  326059 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 10:17:09.833995  326059 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 10:17:09.834063  326059 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 10:17:09.834067  326059 kubeadm.go:319] 
	I1206 10:17:09.834151  326059 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 10:17:09.834228  326059 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 10:17:09.834238  326059 kubeadm.go:319] 
	I1206 10:17:09.834322  326059 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 11h78t.cowjljjl30xhmp35 \
	I1206 10:17:09.834425  326059 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9a3c0c9c90ab0f4223eda0e86927c77df6eeb83b3aa042bddb38493c60751529 \
	I1206 10:17:09.834445  326059 kubeadm.go:319] 	--control-plane 
	I1206 10:17:09.834448  326059 kubeadm.go:319] 
	I1206 10:17:09.834533  326059 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 10:17:09.834537  326059 kubeadm.go:319] 
	I1206 10:17:09.834619  326059 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 11h78t.cowjljjl30xhmp35 \
	I1206 10:17:09.834720  326059 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9a3c0c9c90ab0f4223eda0e86927c77df6eeb83b3aa042bddb38493c60751529 
	I1206 10:17:09.838632  326059 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1206 10:17:09.838872  326059 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:17:09.838981  326059 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
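
The three [WARNING ...] lines are non-fatal because the init command above passes --ignore-preflight-errors=...,Swap,NumCPU,Mem,SystemVerification,...; kubeadm downgrades the listed checks from errors to warnings. The cgroups v1 notice corresponds to the CGROUPS_* lines in the verification dump, and the kernel-config warning only means the AWS host image cannot load the `configs` module. To reproduce just the verification step in isolation, something like the following (a sketch, not a command from this run):

    # hedged sketch: run only kubeadm's preflight phase with a comparable ignore list
    sudo kubeadm init phase preflight \
        --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification
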
	I1206 10:17:09.839016  326059 cni.go:84] Creating CNI manager for "calico"
	I1206 10:17:09.842223  326059 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1206 10:17:09.845364  326059 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 10:17:09.845396  326059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1206 10:17:09.861669  326059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 10:17:11.606278  326059 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.744569558s)
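
For the CNI step, minikube copies the Calico manifest onto the node (the "scp memory --> /var/tmp/minikube/cni.yaml" line, ~330 KB) and applies it with the node-local kubectl binary and the node-local kubeconfig, not the host's. The equivalent invocation by hand, reusing the paths from this run (a sketch):

    # hedged sketch: apply a CNI manifest with the node-side kubectl and kubeconfig
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        apply -f /var/tmp/minikube/cni.yaml
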
	I1206 10:17:11.606313  326059 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 10:17:11.606432  326059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:17:11.606500  326059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-793086 minikube.k8s.io/updated_at=2025_12_06T10_17_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=calico-793086 minikube.k8s.io/primary=true
	I1206 10:17:11.764431  326059 ops.go:34] apiserver oom_adj: -16
	I1206 10:17:11.764546  326059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:17:12.265353  326059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:17:12.765610  326059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:17:13.264663  326059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:17:13.764594  326059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:17:14.264685  326059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:17:14.428919  326059 kubeadm.go:1114] duration metric: took 2.822529995s to wait for elevateKubeSystemPrivileges
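
"elevateKubeSystemPrivileges" covers the clusterrolebinding created at 10:17:11.606432 plus the `get sa default` polls that follow: minikube binds cluster-admin to the kube-system:default service account and retries roughly every 500ms until the default service account exists, 2.8s in total here. The binding alone, as a sketch:

    # hedged sketch: the RBAC binding minikube creates so addons in kube-system
    # can use the default service account with cluster-admin rights
    kubectl create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin \
        --serviceaccount=kube-system:default
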
	I1206 10:17:14.428956  326059 kubeadm.go:403] duration metric: took 21.895665533s to StartCluster
	I1206 10:17:14.428977  326059 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:17:14.429039  326059 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:17:14.429984  326059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:17:14.430201  326059 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:17:14.430295  326059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 10:17:14.430548  326059 config.go:182] Loaded profile config "calico-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:17:14.430589  326059 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:17:14.430664  326059 addons.go:70] Setting storage-provisioner=true in profile "calico-793086"
	I1206 10:17:14.430686  326059 addons.go:239] Setting addon storage-provisioner=true in "calico-793086"
	I1206 10:17:14.430709  326059 host.go:66] Checking if "calico-793086" exists ...
	I1206 10:17:14.431237  326059 cli_runner.go:164] Run: docker container inspect calico-793086 --format={{.State.Status}}
	I1206 10:17:14.431829  326059 addons.go:70] Setting default-storageclass=true in profile "calico-793086"
	I1206 10:17:14.431851  326059 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-793086"
	I1206 10:17:14.432146  326059 cli_runner.go:164] Run: docker container inspect calico-793086 --format={{.State.Status}}
	I1206 10:17:14.438905  326059 out.go:179] * Verifying Kubernetes components...
	I1206 10:17:14.443044  326059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:17:14.482040  326059 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:17:14.484260  326059 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:17:14.484283  326059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:17:14.484346  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:17:14.494915  326059 addons.go:239] Setting addon default-storageclass=true in "calico-793086"
	I1206 10:17:14.494955  326059 host.go:66] Checking if "calico-793086" exists ...
	I1206 10:17:14.495467  326059 cli_runner.go:164] Run: docker container inspect calico-793086 --format={{.State.Status}}
	I1206 10:17:14.526997  326059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/calico-793086/id_rsa Username:docker}
	I1206 10:17:14.538761  326059 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:17:14.538782  326059 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:17:14.538889  326059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-793086
	I1206 10:17:14.563693  326059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/calico-793086/id_rsa Username:docker}
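
The two `docker container inspect -f ...` calls extract the published host port for the container's 22/tcp so sshutil can dial 127.0.0.1:33118. The same Go template works standalone (a sketch, using the profile name from this run):

    # hedged sketch: resolve the host port mapped to a container's 22/tcp
    docker container inspect -f \
        '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        calico-793086
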
	I1206 10:17:14.888337  326059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
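
That one-liner rewrites the coredns ConfigMap in place: it reads the Corefile, uses sed to insert a hosts block mapping host.minikube.internal to the gateway (192.168.85.1) before the `forward . /etc/resolv.conf` line and a `log` directive before `errors`, then pipes the result into `kubectl replace -f -`. A slightly simplified, readable form of the same pipeline (a sketch; the logged command also binds --kubeconfig explicitly):

    # hedged sketch: inject a host record into CoreDNS's Corefile and replace the ConfigMap
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -
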
	I1206 10:17:14.894553  326059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:17:14.965882  326059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:17:14.988417  326059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:17:15.610881  326059 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1206 10:17:15.613024  326059 node_ready.go:35] waiting up to 15m0s for node "calico-793086" to be "Ready" ...
	I1206 10:17:15.973089  326059 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 10:17:15.976714  326059 addons.go:530] duration metric: took 1.546112023s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 10:17:16.115599  326059 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-793086" context rescaled to 1 replicas
	W1206 10:17:17.616504  326059 node_ready.go:57] node "calico-793086" has "Ready":"False" status (will retry)
	W1206 10:17:19.623947  326059 node_ready.go:57] node "calico-793086" has "Ready":"False" status (will retry)
	I1206 10:17:20.616895  326059 node_ready.go:49] node "calico-793086" is "Ready"
	I1206 10:17:20.616931  326059 node_ready.go:38] duration metric: took 5.00387406s for node "calico-793086" to be "Ready" ...
	I1206 10:17:20.616955  326059 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:17:20.617021  326059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:17:20.644670  326059 api_server.go:72] duration metric: took 6.214433475s to wait for apiserver process to appear ...
	I1206 10:17:20.644697  326059 api_server.go:88] waiting for apiserver healthz status ...
	I1206 10:17:20.644716  326059 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 10:17:20.653061  326059 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 10:17:20.654196  326059 api_server.go:141] control plane version: v1.34.2
	I1206 10:17:20.654221  326059 api_server.go:131] duration metric: took 9.516212ms to wait for apiserver health ...
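
The health wait is two separate gates: first `pgrep -xnf kube-apiserver.*minikube.*` confirms the process exists, then an HTTPS GET against /healthz is retried until it returns 200 (the bare "ok" above is the response body). The equivalent probe by hand (a sketch; /healthz is readable anonymously under the default RBAC):

    # hedged sketch: probe the apiserver health endpoint directly
    curl -k https://192.168.85.2:8443/healthz   # prints "ok" when healthy
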
	I1206 10:17:20.654230  326059 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 10:17:20.659540  326059 system_pods.go:59] 9 kube-system pods found
	I1206 10:17:20.659578  326059 system_pods.go:61] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:20.659589  326059 system_pods.go:61] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:20.659598  326059 system_pods.go:61] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:20.659603  326059 system_pods.go:61] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:20.659609  326059 system_pods.go:61] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:20.659613  326059 system_pods.go:61] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:20.659616  326059 system_pods.go:61] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:20.659620  326059 system_pods.go:61] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:20.659625  326059 system_pods.go:61] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:17:20.659631  326059 system_pods.go:74] duration metric: took 5.395658ms to wait for pod list to return data ...
	I1206 10:17:20.659639  326059 default_sa.go:34] waiting for default service account to be created ...
	I1206 10:17:20.662712  326059 default_sa.go:45] found service account: "default"
	I1206 10:17:20.662810  326059 default_sa.go:55] duration metric: took 3.163507ms for default service account to be created ...
	I1206 10:17:20.662850  326059 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 10:17:20.667744  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:20.667828  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:20.667854  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:20.667899  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:20.667923  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:20.667943  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:20.667977  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:20.667998  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:20.668015  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:20.668043  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:17:20.668094  326059 retry.go:31] will retry after 215.772308ms: missing components: kube-dns
	I1206 10:17:20.890552  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:20.890656  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:20.890704  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:20.890731  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:20.890758  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:20.890790  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:20.890813  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:20.890832  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:20.890873  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:20.890896  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:17:20.890924  326059 retry.go:31] will retry after 295.339393ms: missing components: kube-dns
	I1206 10:17:21.191334  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:21.191412  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:21.191427  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:21.191443  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:21.191453  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:21.191463  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:21.191478  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:21.191484  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:21.191488  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:21.191494  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:17:21.191533  326059 retry.go:31] will retry after 425.735553ms: missing components: kube-dns
	I1206 10:17:21.622600  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:21.622637  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:21.622653  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:21.622664  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:21.622671  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:21.622681  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:21.622690  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:21.622695  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:21.622700  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:21.622708  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:21.622722  326059 retry.go:31] will retry after 424.813239ms: missing components: kube-dns
	I1206 10:17:22.051184  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:22.051227  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:22.051241  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:22.051250  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:22.051293  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:22.051299  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:22.051305  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:22.051316  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:22.051321  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:22.051327  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:22.051350  326059 retry.go:31] will retry after 571.097309ms: missing components: kube-dns
	I1206 10:17:22.631284  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:22.631324  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:22.631334  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:22.631342  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:22.631347  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:22.631352  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:22.631357  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:22.631361  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:22.631365  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:22.631370  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:22.631432  326059 retry.go:31] will retry after 838.29782ms: missing components: kube-dns
	I1206 10:17:23.477241  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:23.477277  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:23.477289  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:23.477296  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:23.477302  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:23.477308  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:23.477312  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:23.477326  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:23.477330  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:23.477334  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:23.477351  326059 retry.go:31] will retry after 910.542865ms: missing components: kube-dns
	I1206 10:17:24.391991  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:24.392030  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:24.392039  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:24.392049  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:24.392054  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:24.392060  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:24.392066  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:24.392070  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:24.392074  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:24.392078  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:24.392091  326059 retry.go:31] will retry after 1.312453989s: missing components: kube-dns
	I1206 10:17:25.709561  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:25.709685  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:25.709721  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:25.709746  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:25.709754  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:25.709769  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:25.709773  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:25.709778  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:25.709789  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:25.709795  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:25.709830  326059 retry.go:31] will retry after 1.795950145s: missing components: kube-dns
	I1206 10:17:27.509371  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:27.509415  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:27.509426  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:27.509433  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:27.509440  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:27.509445  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:27.509450  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:27.509455  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:27.509459  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:27.509463  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:27.509482  326059 retry.go:31] will retry after 1.47204748s: missing components: kube-dns
	I1206 10:17:28.987400  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:28.987432  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:28.987444  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:28.987476  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:28.987489  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:28.987495  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:28.987500  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:28.987510  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:28.987515  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:28.987519  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:28.987539  326059 retry.go:31] will retry after 2.58960962s: missing components: kube-dns
	I1206 10:17:31.581157  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:31.581195  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:31.581207  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:31.581216  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:31.581220  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:31.581225  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:31.581229  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:31.581234  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:31.581238  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:31.581241  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:31.581263  326059 retry.go:31] will retry after 2.771020451s: missing components: kube-dns
	I1206 10:17:34.356934  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:34.356978  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1206 10:17:34.356987  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:34.356998  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:17:34.357002  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:34.357008  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:34.357013  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:34.357018  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:34.357022  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:34.357027  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:34.357048  326059 retry.go:31] will retry after 3.512692309s: missing components: kube-dns
	I1206 10:17:37.876667  326059 system_pods.go:86] 9 kube-system pods found
	I1206 10:17:37.876706  326059 system_pods.go:89] "calico-kube-controllers-5c676f698c-9lb7p" [67442d2a-acfc-47df-9c69-8a492e7ebed5] Running
	I1206 10:17:37.876716  326059 system_pods.go:89] "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1206 10:17:37.876723  326059 system_pods.go:89] "coredns-66bc5c9577-gxk8d" [fb975cd9-8f3b-459c-8e40-556f31a206a4] Running
	I1206 10:17:37.876729  326059 system_pods.go:89] "etcd-calico-793086" [9ccf14d5-770f-48f4-92f5-9234f52df4d8] Running
	I1206 10:17:37.876733  326059 system_pods.go:89] "kube-apiserver-calico-793086" [df86d2f4-a789-470a-aad7-f4018587c1c3] Running
	I1206 10:17:37.876738  326059 system_pods.go:89] "kube-controller-manager-calico-793086" [7e26c6e1-ce11-4fcc-ab4b-edd7c277aacb] Running
	I1206 10:17:37.876742  326059 system_pods.go:89] "kube-proxy-kjs6f" [944d418f-7a11-4595-b1c5-7b0ddd9a27fe] Running
	I1206 10:17:37.876747  326059 system_pods.go:89] "kube-scheduler-calico-793086" [eb381bd0-03f3-4930-b42a-e850b888c732] Running
	I1206 10:17:37.876751  326059 system_pods.go:89] "storage-provisioner" [ef9b1095-cd38-4a3a-98ec-0b7655252ae4] Running
	I1206 10:17:37.876758  326059 system_pods.go:126] duration metric: took 17.213870783s to wait for k8s-apps to be running ...
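
The thirteen "will retry after ..." blocks above are a single readiness poll: system_pods lists the kube-system pods, flags coredns (kube-dns) as the only component not yet running, and retries with a jittered, roughly growing delay (216ms up to ~3.5s) until Calico's node agent comes up and CoreDNS can start, 17.2s in total. A crude equivalent of that wait, as a sketch:

    # hedged sketch: poll until the kube-dns pod reports phase Running
    until kubectl -n kube-system get pods -l k8s-app=kube-dns \
          -o jsonpath='{.items[0].status.phase}' 2>/dev/null | grep -q Running; do
        sleep 2
    done
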
	I1206 10:17:37.876772  326059 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 10:17:37.876835  326059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:17:37.904305  326059 system_svc.go:56] duration metric: took 27.525479ms WaitForService to wait for kubelet
	I1206 10:17:37.904332  326059 kubeadm.go:587] duration metric: took 23.474101891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:17:37.904350  326059 node_conditions.go:102] verifying NodePressure condition ...
	I1206 10:17:37.907423  326059 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 10:17:37.907461  326059 node_conditions.go:123] node cpu capacity is 2
	I1206 10:17:37.907478  326059 node_conditions.go:105] duration metric: took 3.121463ms to run NodePressure ...
	I1206 10:17:37.907492  326059 start.go:242] waiting for startup goroutines ...
	I1206 10:17:37.907500  326059 start.go:247] waiting for cluster config update ...
	I1206 10:17:37.907517  326059 start.go:256] writing updated cluster config ...
	I1206 10:17:37.907819  326059 ssh_runner.go:195] Run: rm -f paused
	I1206 10:17:37.912978  326059 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:17:37.917712  326059 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gxk8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:37.922941  326059 pod_ready.go:94] pod "coredns-66bc5c9577-gxk8d" is "Ready"
	I1206 10:17:37.922969  326059 pod_ready.go:86] duration metric: took 5.224399ms for pod "coredns-66bc5c9577-gxk8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:37.925742  326059 pod_ready.go:83] waiting for pod "etcd-calico-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:37.931143  326059 pod_ready.go:94] pod "etcd-calico-793086" is "Ready"
	I1206 10:17:37.931186  326059 pod_ready.go:86] duration metric: took 5.41407ms for pod "etcd-calico-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:37.934038  326059 pod_ready.go:83] waiting for pod "kube-apiserver-calico-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:37.939228  326059 pod_ready.go:94] pod "kube-apiserver-calico-793086" is "Ready"
	I1206 10:17:37.939257  326059 pod_ready.go:86] duration metric: took 5.191726ms for pod "kube-apiserver-calico-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:37.942010  326059 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:38.317720  326059 pod_ready.go:94] pod "kube-controller-manager-calico-793086" is "Ready"
	I1206 10:17:38.317751  326059 pod_ready.go:86] duration metric: took 375.7147ms for pod "kube-controller-manager-calico-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:38.518837  326059 pod_ready.go:83] waiting for pod "kube-proxy-kjs6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:38.917710  326059 pod_ready.go:94] pod "kube-proxy-kjs6f" is "Ready"
	I1206 10:17:38.917740  326059 pod_ready.go:86] duration metric: took 398.877063ms for pod "kube-proxy-kjs6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:39.118519  326059 pod_ready.go:83] waiting for pod "kube-scheduler-calico-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:39.519079  326059 pod_ready.go:94] pod "kube-scheduler-calico-793086" is "Ready"
	I1206 10:17:39.519105  326059 pod_ready.go:86] duration metric: took 400.557259ms for pod "kube-scheduler-calico-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:17:39.519117  326059 pod_ready.go:40] duration metric: took 1.606095834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
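
This final pod_ready pass re-checks one representative pod per control-plane label (k8s-app=kube-dns, component=etcd, component=kube-apiserver, and so on) for the Ready condition rather than just the pod phase. The per-label check it is effectively performing, shown for one label (a sketch):

    # hedged sketch: read the Ready condition for pods matching one control-plane label
    kubectl -n kube-system get pods -l component=kube-scheduler \
        -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
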
	I1206 10:17:39.598540  326059 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 10:17:39.602787  326059 out.go:179] * Done! kubectl is now configured to use "calico-793086" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580837118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580855563Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580885274Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580900995Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580911087Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580921853Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580931149Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580945311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580962436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580998678Z" level=info msg="Connect containerd service"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.581274307Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.581881961Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598029541Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598099063Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598123851Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598177546Z" level=info msg="Start recovering state"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621091913Z" level=info msg="Start event monitor"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621277351Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621341351Z" level=info msg="Start streaming server"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621405639Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621464397Z" level=info msg="runtime interface starting up..."
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621515523Z" level=info msg="starting plugins..."
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621595705Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:02:56 no-preload-257359 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.623695007Z" level=info msg="containerd successfully booted in 0.067121s"
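The "failed to load cni during init" error earlier in this section is normally transient on a restart: containerd comes up before any CNI config has been written to /etc/cni/net.d, and minikube only deploys a CNI (kindnet for the docker driver + containerd runtime combination, per the cni.go lines later in these logs) once Kubernetes is up. A quick check, plus a minimal illustrative conflist of the shape CNI expects (a sketch only: the file name is hypothetical, the subnet is borrowed from the newest-cni profile's pod-network-cidr, and this is not the file minikube itself writes):

    out/minikube-linux-arm64 -p no-preload-257359 ssh -- ls /etc/cni/net.d
    # hypothetical minimal bridge config, for illustration only:
    sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [{
        "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
        "ipam": {"type": "host-local", "ranges": [[{"subnet": "10.42.0.0/16"}]]}
      }]
    }
    EOF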
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:18:03.147142    8180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:18:03.148102    8180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:18:03.148958    8180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:18:03.150752    8180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:18:03.151796    8180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
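The connection-refused errors above only say that nothing is listening on 8443; with the kubelet crash-looping (see the kubelet section below) the apiserver static pod is never started. Two quick probes from inside the node, assuming this run's profile name (and that curl is present in the node image):

    out/minikube-linux-arm64 -p no-preload-257359 ssh -- sudo crictl ps -a --name kube-apiserver
    out/minikube-linux-arm64 -p no-preload-257359 ssh -- curl -sk https://localhost:8443/livez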
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:18:03 up  2:00,  0 user,  load average: 1.75, 1.35, 1.37
	Linux no-preload-257359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:17:59 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:18:00 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 06 10:18:00 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:18:00 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:18:00 no-preload-257359 kubelet[8048]: E1206 10:18:00.527616    8048 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:18:00 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:18:00 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:18:01 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 06 10:18:01 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:18:01 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:18:01 no-preload-257359 kubelet[8054]: E1206 10:18:01.312889    8054 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:18:01 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:18:01 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:18:01 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 06 10:18:01 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:18:01 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:18:02 no-preload-257359 kubelet[8076]: E1206 10:18:02.064343    8076 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:18:02 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:18:02 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:18:02 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 06 10:18:02 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:18:02 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:18:02 no-preload-257359 kubelet[8123]: E1206 10:18:02.822883    8123 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:18:02 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:18:02 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
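This crash loop (restart counter above 1200) is the root cause of the no-preload and newest-cni failures in this run: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, and this Ubuntu 20.04 builder boots with cgroup v1. The standard check for a host's cgroup version:

    stat -fc %T /sys/fs/cgroup/
    # prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1

Switching the builder to v2 would mean booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line, or moving to a newer base image that defaults to v2.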
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359: exit status 2 (520.659981ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-257359" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.68s)

TestStartStop/group/newest-cni/serial/Pause (9.93s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-387337 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (321.650602ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-387337 -n newest-cni-387337
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (352.966938ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-387337 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (308.876767ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-387337 -n newest-cni-387337
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (322.807641ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
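The failing sequence is easy to replay by hand with the same status templates the test uses; after pause the APIServer field should read "Paused" (the kubelet is stopped by design), and after unpause both fields should read "Running":

    out/minikube-linux-arm64 pause -p newest-cni-387337 --alsologtostderr -v=1
    out/minikube-linux-arm64 status -p newest-cni-387337 --format '{{.APIServer}}/{{.Kubelet}}'
    out/minikube-linux-arm64 unpause -p newest-cni-387337 --alsologtostderr -v=1
    out/minikube-linux-arm64 status -p newest-cni-387337 --format '{{.APIServer}}/{{.Kubelet}}'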
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-387337
helpers_test.go:243: (dbg) docker inspect newest-cni-387337:

-- stdout --
	[
	    {
	        "Id": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	        "Created": "2025-12-06T09:56:17.358293629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:06:25.490985794Z",
	            "FinishedAt": "2025-12-06T10:06:24.07452303Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hosts",
	        "LogPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9-json.log",
	        "Name": "/newest-cni-387337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-387337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-387337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	                "LowerDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-387337",
	                "Source": "/var/lib/docker/volumes/newest-cni-387337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-387337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-387337",
	                "name.minikube.sigs.k8s.io": "newest-cni-387337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0237cbac4089b5971baf99dcc5f5da9d321416f1c02aecd4eecab8f5eca5da8a",
	            "SandboxKey": "/var/run/docker/netns/0237cbac4089",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-387337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:c0:9f:b1:4f:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f42a70d42248e7fb537c8957fc3c9ad0a04046b4da244cdde31b86ebc56a160b",
	                    "EndpointID": "315fc1e3324af45e0df5a53d34bf5d6797d7154b55022bdff9ab7809e194b0cf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-387337",
	                        "e89a14c7a996"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
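Individual fields of the inspect document above can be pulled with Go templates instead of dumping the whole thing; for example (profile name from this run; the network key must match the profile's network name shown above):

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-387337
    docker inspect -f '{{(index .NetworkSettings.Networks "newest-cni-387337").IPAddress}}' newest-cni-387337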
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (368.178364ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-387337 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-387337 logs -n 25: (1.680736227s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-257359 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-257359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-387337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:04 UTC │                     │
	│ stop    │ -p newest-cni-387337 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │ 06 Dec 25 10:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-387337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │ 06 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │                     │
	│ image   │ newest-cni-387337 image list --format=json                                                                                                                                                                                                                 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ pause   │ -p newest-cni-387337 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ unpause │ -p newest-cni-387337 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
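The Audit table above is rendered from minikube's newline-delimited audit log. The raw entries can be queried directly; a sketch, assuming the audit.json path under this job's MINIKUBE_HOME and minikube's cloud-events field layout:

    jq -r '[.data.command, .data.profile, .data.startTime, .data.endTime] | @tsv' \
      /home/jenkins/minikube-integration/22049-2448/.minikube/logs/audit.json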
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:06:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:06:25.195145  293728 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:06:25.195325  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195335  293728 out.go:374] Setting ErrFile to fd 2...
	I1206 10:06:25.195341  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195634  293728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:06:25.196028  293728 out.go:368] Setting JSON to false
	I1206 10:06:25.196926  293728 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6537,"bootTime":1765009049,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:06:25.196997  293728 start.go:143] virtualization:  
	I1206 10:06:25.199959  293728 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:06:25.203880  293728 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:06:25.204017  293728 notify.go:221] Checking for updates...
	I1206 10:06:25.210368  293728 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:06:25.213374  293728 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:25.216371  293728 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:06:25.221036  293728 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:06:25.223973  293728 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:06:25.227572  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:25.228243  293728 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:06:25.261513  293728 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:06:25.261626  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.340601  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.331029372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.340708  293728 docker.go:319] overlay module found
	I1206 10:06:25.343872  293728 out.go:179] * Using the docker driver based on existing profile
	I1206 10:06:25.346835  293728 start.go:309] selected driver: docker
	I1206 10:06:25.346867  293728 start.go:927] validating driver "docker" against &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.346969  293728 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:06:25.347911  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.407260  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.398348793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.407652  293728 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 10:06:25.407684  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:25.407750  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:25.407788  293728 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.410983  293728 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 10:06:25.413800  293728 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:06:25.416704  293728 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:06:25.419472  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:25.419517  293728 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:06:25.419530  293728 cache.go:65] Caching tarball of preloaded images
	I1206 10:06:25.419542  293728 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:06:25.419614  293728 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:06:25.419624  293728 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:06:25.419745  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.439065  293728 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:06:25.439097  293728 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:06:25.439117  293728 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:06:25.439151  293728 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:06:25.439218  293728 start.go:364] duration metric: took 44.948µs to acquireMachinesLock for "newest-cni-387337"
	I1206 10:06:25.439242  293728 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:06:25.439250  293728 fix.go:54] fixHost starting: 
	I1206 10:06:25.439553  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.455936  293728 fix.go:112] recreateIfNeeded on newest-cni-387337: state=Stopped err=<nil>
	W1206 10:06:25.455970  293728 fix.go:138] unexpected machine state, will restart: <nil>
	W1206 10:06:22.222571  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:24.223444  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:25.459174  293728 out.go:252] * Restarting existing docker container for "newest-cni-387337" ...
	I1206 10:06:25.459260  293728 cli_runner.go:164] Run: docker start newest-cni-387337
	I1206 10:06:25.713574  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.738668  293728 kic.go:430] container "newest-cni-387337" state is running.
	I1206 10:06:25.739140  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:25.765706  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.766035  293728 machine.go:94] provisionDockerMachine start ...
	I1206 10:06:25.766147  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:25.787280  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:25.787973  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:25.787996  293728 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:06:25.789031  293728 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:06:28.943483  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:28.943510  293728 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 10:06:28.943583  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:28.962379  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:28.962708  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:28.962726  293728 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 10:06:29.136463  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:29.136552  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.155008  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:29.155343  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:29.155363  293728 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:06:29.311555  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:06:29.311646  293728 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:06:29.311703  293728 ubuntu.go:190] setting up certificates
	I1206 10:06:29.311733  293728 provision.go:84] configureAuth start
	I1206 10:06:29.311826  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.328361  293728 provision.go:143] copyHostCerts
	I1206 10:06:29.328435  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:06:29.328455  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:06:29.328532  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:06:29.328644  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:06:29.328655  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:06:29.328683  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:06:29.328754  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:06:29.328763  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:06:29.328788  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:06:29.328850  293728 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
	I1206 10:06:29.477422  293728 provision.go:177] copyRemoteCerts
	I1206 10:06:29.477497  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:06:29.477551  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.495349  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.603554  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:06:29.622338  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:06:29.641011  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:06:29.660417  293728 provision.go:87] duration metric: took 348.656521ms to configureAuth
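
configureAuth regenerates the docker-machine style server certificate when needed; the san=[...] list in the provision.go:117 line above becomes the certificate's IP and DNS subject alternative names. A self-signed sketch with crypto/x509 showing how those SANs split into IPAddresses and DNSNames (minikube actually signs with its CA rather than self-signing; error handling elided):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-387337"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
            // SANs from the log: IPs and DNS names go in separate fields
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"localhost", "minikube", "newest-cni-387337"},
            KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
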
	I1206 10:06:29.660488  293728 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:06:29.660700  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:29.660714  293728 machine.go:97] duration metric: took 3.894659315s to provisionDockerMachine
	I1206 10:06:29.660722  293728 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 10:06:29.660734  293728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:06:29.660787  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:06:29.660840  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.679336  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.792654  293728 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:06:29.796414  293728 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:06:29.796451  293728 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:06:29.796481  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:06:29.796555  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:06:29.796637  293728 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:06:29.796752  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:06:29.804466  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:29.822913  293728 start.go:296] duration metric: took 162.176035ms for postStartSetup
	I1206 10:06:29.822993  293728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:06:29.823033  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.841962  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.944706  293728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:06:29.949621  293728 fix.go:56] duration metric: took 4.510364001s for fixHost
	I1206 10:06:29.949690  293728 start.go:83] releasing machines lock for "newest-cni-387337", held for 4.510458303s
	I1206 10:06:29.949801  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.966982  293728 ssh_runner.go:195] Run: cat /version.json
	I1206 10:06:29.967044  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.967315  293728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:06:29.967425  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.989346  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.995399  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:30.108934  293728 ssh_runner.go:195] Run: systemctl --version
	W1206 10:06:26.722852  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:29.222555  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:30.251570  293728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:06:30.256600  293728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:06:30.256686  293728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:06:30.265366  293728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
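
The find/-exec step above disables any pre-existing bridge or podman CNI configs by renaming them with an .mk_disabled suffix rather than deleting them, which keeps them restorable; in this run nothing matched. A Go sketch of the same rename-to-disable idea, with the glob patterns taken from the command (disableBridgeCNI is an invented name):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // disableBridgeCNI renames bridge/podman CNI configs out of the way,
    // skipping files that are already disabled.
    func disableBridgeCNI(dir string) error {
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return err
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() {
        if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
            fmt.Println(err)
        }
    }
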
	I1206 10:06:30.265436  293728 start.go:496] detecting cgroup driver to use...
	I1206 10:06:30.265475  293728 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:06:30.265547  293728 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:06:30.285393  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:06:30.300014  293728 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:06:30.300101  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:06:30.316388  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:06:30.330703  293728 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:06:30.447811  293728 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:06:30.578928  293728 docker.go:234] disabling docker service ...
	I1206 10:06:30.579012  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:06:30.595245  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:06:30.608936  293728 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:06:30.732584  293728 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:06:30.854426  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
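
Disabling a competing runtime here is a fixed systemd sequence: stop the socket and the service, then disable and mask them so nothing re-activates them behind containerd's back. A sketch of that sequence from Go, shelling out to systemctl the way ssh_runner does (disableService is an invented helper; failures from already-stopped units are tolerated, as in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // disableService stops, disables, and masks a systemd unit so it cannot
    // come back via socket activation or a daemon-reload.
    func disableService(unit string) {
        for _, args := range [][]string{
            {"stop", "-f", unit + ".socket"},
            {"stop", "-f", unit + ".service"},
            {"disable", unit + ".socket"},
            {"mask", unit + ".service"},
        } {
            cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
            if out, err := cmd.CombinedOutput(); err != nil {
                // a missing or already-stopped unit is fine here
                fmt.Printf("systemctl %v: %v (%s)\n", args, err, out)
            }
        }
    }

    func main() {
        disableService("cri-docker")
        disableService("docker")
    }
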
	I1206 10:06:30.867755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:06:30.882294  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:06:30.891997  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:06:30.901695  293728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:06:30.901766  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:06:30.911307  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.920864  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:06:30.930280  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.939955  293728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:06:30.948517  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:06:30.957894  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:06:30.967715  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:06:30.977793  293728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:06:30.985557  293728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:06:30.993239  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.114748  293728 ssh_runner.go:195] Run: sudo systemctl restart containerd
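
Rather than templating a fresh config.toml, the sequence above rewrites /etc/containerd/config.toml in place with targeted sed substitutions, so unrelated operator settings survive; the SystemdCgroup edit is what pins containerd to the cgroupfs driver detected earlier. A rough Go equivalent of that single substitution (setSystemdCgroup is an invented helper):

    package main

    import (
        "os"
        "regexp"
    )

    // setSystemdCgroup flips the SystemdCgroup key wherever it appears,
    // preserving indentation, like:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    func setSystemdCgroup(path string, enabled bool) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        val := "false"
        if enabled {
            val = "true"
        }
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = "+val))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        _ = setSystemdCgroup("/etc/containerd/config.toml", false)
    }
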
	I1206 10:06:31.239476  293728 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:06:31.239597  293728 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:06:31.244664  293728 start.go:564] Will wait 60s for crictl version
	I1206 10:06:31.244770  293728 ssh_runner.go:195] Run: which crictl
	I1206 10:06:31.249231  293728 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:06:31.276528  293728 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:06:31.276637  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.298790  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.323558  293728 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:06:31.326534  293728 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:06:31.343556  293728 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 10:06:31.347752  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:06:31.361512  293728 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 10:06:31.364437  293728 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:06:31.364599  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:31.364692  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.390507  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.390542  293728 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:06:31.390602  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.417903  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.417928  293728 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:06:31.417937  293728 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 10:06:31.418044  293728 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
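
The unit fragment above is the systemd drop-in minikube renders for the kubelet and later copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 328-byte scp below). A toy rendering of the same shape with text/template, with illustrative field names rather than minikube's actual template data:

    package main

    import (
        "os"
        "text/template"
    )

    // A pared-down drop-in: clear any inherited ExecStart, then set ours.
    const kubeletDropIn = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
        _ = t.Execute(os.Stdout, map[string]string{
            "Runtime":     "containerd",
            "KubeletPath": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
            "NodeName":    "newest-cni-387337",
            "NodeIP":      "192.168.85.2",
        })
    }
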
	I1206 10:06:31.418117  293728 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:06:31.443849  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:31.443876  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:31.443900  293728 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 10:06:31.443924  293728 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:06:31.444044  293728 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
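Note how the single pod-network-cidr extra option from the profile surfaces twice in the generated config above: as networking.podSubnet in ClusterConfiguration and as clusterCIDR in KubeProxyConfiguration, so kubeadm and kube-proxy always agree. A sketch of that parse-once, render-twice plumbing (purely illustrative, not minikube code):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        podCIDR := "10.42.0.0/16" // from --extra-config=kubeadm.pod-network-cidr
        if _, _, err := net.ParseCIDR(podCIDR); err != nil {
            panic(err) // reject malformed CIDRs before rendering any config
        }
        // the same validated value feeds both kubeadm documents
        fmt.Printf("networking:\n  podSubnet: %q\n", podCIDR)
        fmt.Printf("clusterCIDR: %q\n", podCIDR)
    }
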
	I1206 10:06:31.444118  293728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:06:31.452187  293728 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:06:31.452301  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:06:31.460150  293728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:06:31.473854  293728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:06:31.487946  293728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 10:06:31.501615  293728 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:06:31.505530  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:06:31.516062  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.633832  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:06:31.655929  293728 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 10:06:31.655955  293728 certs.go:195] generating shared ca certs ...
	I1206 10:06:31.655972  293728 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:31.656127  293728 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:06:31.656182  293728 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:06:31.656198  293728 certs.go:257] generating profile certs ...
	I1206 10:06:31.656306  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 10:06:31.656372  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 10:06:31.656419  293728 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 10:06:31.656536  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:06:31.656576  293728 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:06:31.656590  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:06:31.656620  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:06:31.656647  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:06:31.656675  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:06:31.656737  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:31.657407  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:06:31.678086  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:06:31.699851  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:06:31.722100  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:06:31.743193  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:06:31.762896  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:06:31.781616  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:06:31.801280  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:06:31.819401  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:06:31.838552  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:06:31.856936  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:06:31.875547  293728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:06:31.888930  293728 ssh_runner.go:195] Run: openssl version
	I1206 10:06:31.895342  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.903529  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:06:31.911304  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915287  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915352  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.961696  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:06:31.970315  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.981710  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:06:31.992227  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996668  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996744  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:06:32.043296  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:06:32.051139  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.058979  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:06:32.066993  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071120  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071217  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.113955  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
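
Each openssl x509 -hash / test -L pair above is a hand-rolled c_rehash: the CA certificate is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs as <subject-hash>.0 so OpenSSL's hash-based lookup can find it. A Go sketch of that dance, shelling out to openssl as the log does (rehash is an invented helper; error handling trimmed):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // rehash links certPath into certsDir under its OpenSSL subject hash,
    // e.g. /etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem
    func rehash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }
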
	I1206 10:06:32.121998  293728 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:06:32.126168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:06:32.167933  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:06:32.209594  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:06:32.252826  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:06:32.295168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:06:32.336384  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:06:32.377923  293728 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:32.378019  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:06:32.378107  293728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:06:32.406152  293728 cri.go:89] found id: ""
	I1206 10:06:32.406224  293728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:06:32.414373  293728 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:06:32.414394  293728 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:06:32.414444  293728 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:06:32.422214  293728 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:06:32.422855  293728 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-387337" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.423179  293728 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-387337" cluster setting kubeconfig missing "newest-cni-387337" context setting]
	I1206 10:06:32.423737  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
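
The "WriteFile acquiring" lines wrap kubeconfig writes in a named lock with a retry delay and timeout ({Delay:500ms Timeout:1m0s} above), so parallel test profiles cannot interleave writes to the shared file. A crude sketch of such a lock using an O_EXCL lockfile (purely illustrative; minikube uses a proper mutex package, not this scheme):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire spins on an O_EXCL lockfile, retrying every delay until timeout.
    func acquire(lockPath string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
            if err == nil {
                f.Close()
                return func() { os.Remove(lockPath) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for " + lockPath)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        // ... write the kubeconfig here ...
    }
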
	I1206 10:06:32.425135  293728 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:06:32.433653  293728 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 10:06:32.433689  293728 kubeadm.go:602] duration metric: took 19.289872ms to restartPrimaryControlPlane
	I1206 10:06:32.433699  293728 kubeadm.go:403] duration metric: took 55.791147ms to StartCluster
	I1206 10:06:32.433714  293728 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.433786  293728 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.434769  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.434995  293728 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:06:32.435318  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:32.435370  293728 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:06:32.435471  293728 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-387337"
	I1206 10:06:32.435485  293728 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-387337"
	I1206 10:06:32.435510  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435575  293728 addons.go:70] Setting dashboard=true in profile "newest-cni-387337"
	I1206 10:06:32.435608  293728 addons.go:239] Setting addon dashboard=true in "newest-cni-387337"
	W1206 10:06:32.435630  293728 addons.go:248] addon dashboard should already be in state true
	I1206 10:06:32.435689  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435986  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436310  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436715  293728 addons.go:70] Setting default-storageclass=true in profile "newest-cni-387337"
	I1206 10:06:32.436742  293728 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-387337"
	I1206 10:06:32.437054  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.440794  293728 out.go:179] * Verifying Kubernetes components...
	I1206 10:06:32.443631  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:32.498221  293728 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 10:06:32.501060  293728 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 10:06:32.503631  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 10:06:32.503654  293728 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 10:06:32.503744  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.508648  293728 addons.go:239] Setting addon default-storageclass=true in "newest-cni-387337"
	I1206 10:06:32.508690  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.509493  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.523049  293728 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:06:32.526921  293728 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.526947  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:06:32.527022  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.570818  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.571691  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.595638  293728 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.595658  293728 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:06:32.595716  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.624247  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.694342  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:06:32.746370  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 10:06:32.746390  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 10:06:32.765644  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.786998  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 10:06:32.787020  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 10:06:32.804870  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.820938  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 10:06:32.821012  293728 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 10:06:32.877095  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 10:06:32.877165  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 10:06:32.903565  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 10:06:32.903593  293728 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 10:06:32.916625  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 10:06:32.916699  293728 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 10:06:32.930049  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 10:06:32.930072  293728 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 10:06:32.943222  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 10:06:32.943248  293728 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 10:06:32.958124  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:32.958148  293728 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 10:06:32.971454  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:33.482958  293728 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:06:33.483036  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:33.483155  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483183  293728 retry.go:31] will retry after 318.519734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:33.483231  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483244  293728 retry.go:31] will retry after 239.813026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:33.483501  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483518  293728 retry.go:31] will retry after 128.431008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
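
The "will retry after ...ms" lines come from minikube's retry helper: each failed kubectl apply is re-run after a randomized, growing delay until the apiserver starts answering on localhost:8443, which is exactly what happens once the restarted control plane comes up. A minimal sketch of that pattern, assuming an exponential base with jitter (the durations and attempt cap are illustrative, not minikube's actual policy):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping a jittered, growing delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // jitter in [0.5, 1.5) around an exponentially growing base
            d := time.Duration(float64(base) * (0.5 + rand.Float64()) * float64(int(1)<<i))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        i := 0
        _ = retryWithBackoff(5, 200*time.Millisecond, func() error {
            i++
            if i < 3 {
                return fmt.Errorf("connection refused")
            }
            return nil
        })
    }
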
	I1206 10:06:33.612510  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:33.679631  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.679670  293728 retry.go:31] will retry after 494.781452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:06:33.723639  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:33.790368  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.790401  293728 retry.go:31] will retry after 373.145908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
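	Every failure above has the same root cause: kubectl validates manifests client-side by first downloading the cluster's OpenAPI schema, so with kube-apiserver down the GET to https://localhost:8443/openapi/v2 is refused before any YAML is even read (--validate=false would merely skip that fetch, not fix the cluster). A self-contained probe that reproduces the failing request, assuming only that the apiserver listens on localhost:8443 with a self-signed certificate:

// openapiprobe.go — reproduces the request that fails in every stderr block
// above. Illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // matches the ?timeout=32s in the log
		Transport: &http.Transport{
			// The test cluster uses a self-signed CA; skip verification
			// for this illustrative probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// With kube-apiserver down this prints the same
		// "connect: connection refused" seen in the log.
		fmt.Println("openapi fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
}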
	I1206 10:06:33.802573  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:33.864526  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.864571  293728 retry.go:31] will retry after 555.783365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:33.983818  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
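	In parallel with the apply retries, minikube polls for a running apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms, as the timestamps on these ssh_runner lines show. A hypothetical Go stand-in for that probe (not minikube's actual implementation):

// apiserverwait.go — poll for a kube-apiserver process the same way the
// log does, via pgrep over the full command line.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning returns true when pgrep finds a kube-apiserver process
// whose command line matches the minikube profile.
// -x matches the whole command line, -n picks the newest match,
// -f matches against the full command line.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits 0 iff at least one process matched
}

func main() {
	for i := 0; i < 10; i++ {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(500 * time.Millisecond) // poll interval, as in the log
	}
	fmt.Println("kube-apiserver never came up")
}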
	I1206 10:06:34.164188  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:34.174768  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:34.315072  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.315120  293728 retry.go:31] will retry after 679.653646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1206 10:06:34.319455  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.319548  293728 retry.go:31] will retry after 695.531102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:06:34.421513  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:34.483690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:34.487662  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.487697  293728 retry.go:31] will retry after 692.225187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:34.983561  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:34.995819  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:35.016010  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:35.122122  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.122225  293728 retry.go:31] will retry after 1.142566381s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1206 10:06:35.138887  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.138925  293728 retry.go:31] will retry after 649.678663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:06:35.180839  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:31.222846  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:33.722513  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
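	The two node_ready.go lines above come from a different test process (pid 287962, the no-preload cluster at 192.168.76.2) whose log is interleaved here; it is stuck in the same kind of loop, polling a node's Ready condition against an unreachable apiserver. A sketch of such a readiness poll using client-go; the kubeconfig path and poll interval are assumptions:

// nodeready.go — poll a node's Ready condition until it is True, printing
// the same "connection refused" as the log while the apiserver is down.
// Illustrative sketch, not minikube's node_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-257359", metav1.GetOptions{})
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2 * time.Second)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}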
	W1206 10:06:35.247363  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.247415  293728 retry.go:31] will retry after 580.881907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:35.483771  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:35.788736  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:35.829213  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:35.856520  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.856598  293728 retry.go:31] will retry after 1.553154314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1206 10:06:35.896812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.896844  293728 retry.go:31] will retry after 933.683215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:35.984035  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.265085  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:36.326884  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.326918  293728 retry.go:31] will retry after 708.086155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 10:06:36.484141  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.831542  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:36.897118  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.897156  293728 retry.go:31] will retry after 1.33074055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:36.983504  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.035538  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:37.096009  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.096042  293728 retry.go:31] will retry after 1.790090237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 10:06:37.410554  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:37.480541  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.480578  293728 retry.go:31] will retry after 966.279559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 10:06:37.483641  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.984118  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:38.228242  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:38.293907  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.293942  293728 retry.go:31] will retry after 2.616205885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 10:06:38.447170  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:38.483864  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:38.514147  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.514181  293728 retry.go:31] will retry after 2.714109668s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:06:38.886857  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:38.951997  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.952029  293728 retry.go:31] will retry after 2.462359856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
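Note the retry cadence: retry.go reschedules each failed apply with a randomized delay that starts around 2.5-2.7s here and grows to roughly 13s later in this log. A jittered, growing backoff of the same shape might look like the sketch below (an illustration of the pattern in these lines, not minikube's actual retry.go; the base delay and attempt count are invented):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn until it succeeds or attempts run out, sleeping a
// jittered, doubling delay between tries, like the "will retry after
// N.NNs" lines in the log.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << uint(i)                      // 0.5s, 1s, 2s, ...
		d += time.Duration(rand.Int63n(int64(d))) // up to 100% jitter
		fmt.Printf("will retry after %s: %v\n", d.Round(time.Millisecond), err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(4, 500*time.Millisecond, func() error {
		return errors.New("connect: connection refused") // stand-in for the failing apply
	})
	fmt.Println("gave up:", err)
}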
	I1206 10:06:38.983614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.483264  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.983242  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
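Interleaved with the applies, minikube polls sudo pgrep -xnf kube-apiserver.*minikube.* on a steady ~500ms cadence, waiting for a kube-apiserver process to appear; none of these probes succeeds in this window. A self-contained sketch of such a wait loop (assuming local execution; per the ssh_runner.go lines, minikube actually runs the pgrep inside the node over SSH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern shows up or
// the deadline passes, on the same ~500ms cadence visible in the log.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the full
		// command line (-f), exactly (-x), newest first (-n).
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is running")
}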
	W1206 10:06:35.723224  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:38.222673  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:40.483248  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:40.910479  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:40.983819  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:40.985785  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:40.985821  293728 retry.go:31] will retry after 2.652074408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:06:41.229298  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:41.298980  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[same ten "failed to download openapi ... connection refused" validation errors as in the first dashboard attempt above, one per manifest; omitted]
	I1206 10:06:41.299018  293728 retry.go:31] will retry after 3.795353676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:06:41.415143  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:41.478696  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.478758  293728 retry.go:31] will retry after 5.28721939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:06:41.483845  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:41.983945  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.483250  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.483241  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.638309  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:43.697835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:43.697874  293728 retry.go:31] will retry after 4.887793633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:06:43.983195  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.483546  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.983775  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.095370  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:45.192562  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[same ten "failed to download openapi ... connection refused" validation errors as in the first dashboard attempt above, one per manifest; omitted]
	I1206 10:06:45.192602  293728 retry.go:31] will retry after 8.015655906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	W1206 10:06:40.722829  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:42.723326  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:45.223605  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
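The node_ready warnings interleaved here come from a second test process (pid 287962) driving the no-preload cluster: every ~2-2.5s it issues GET /api/v1/nodes/no-preload-257359 and checks the node's Ready condition, and each call is refused at 192.168.76.2:8443 because that cluster's apiserver is not up either. With client-go, that polling loop reduces to something like this sketch (the kubeconfig path and node name are taken from the log; the fixed 2s interval is an approximation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches one node and reports whether its Ready condition is True,
// mirroring the GET /api/v1/nodes/<name> calls the log keeps retrying.
func nodeReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. dial tcp 192.168.76.2:8443: connect: connection refused
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		ready, err := nodeReady(context.Background(), client, "no-preload-257359")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}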
	I1206 10:06:45.483497  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.984044  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.483220  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.766179  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:46.829923  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:46.829956  293728 retry.go:31] will retry after 4.667102636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:06:46.984011  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.483312  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.984058  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.586389  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:48.650814  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:48.650848  293728 retry.go:31] will retry after 13.339615646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:06:48.983299  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.483453  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.983414  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:47.722614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:50.222614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:50.483943  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:50.983588  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.497329  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:51.584226  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:51.584262  293728 retry.go:31] will retry after 10.765270657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:06:51.983783  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.484023  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.983169  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.208585  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:53.275063  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[same ten "failed to download openapi ... connection refused" validation errors as in the first dashboard attempt above, one per manifest; omitted]
	I1206 10:06:53.275124  293728 retry.go:31] will retry after 12.265040886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:06:53.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.983886  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.483520  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.983246  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:52.722502  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:54.722548  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:55.484066  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:55.983753  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.483532  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.983522  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.483514  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.983263  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.483994  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.983173  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.483759  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.983187  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:56.722592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:58.723298  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:00.483755  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:00.984174  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.983995  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.991432  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:02.091463  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.091500  293728 retry.go:31] will retry after 13.890333948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:07:02.349878  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:02.411835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.411870  293728 retry.go:31] will retry after 7.977295138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure immediately above; omitted]
	I1206 10:07:02.483150  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:02.983902  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.483778  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.983278  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.483894  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.983934  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:01.222997  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:03.722642  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:05.483794  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:05.540834  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:05.606800  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[same ten "failed to download openapi ... connection refused" validation errors as in the first dashboard attempt above, one per manifest; omitted]
	I1206 10:07:05.606832  293728 retry.go:31] will retry after 11.29369971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:05.983418  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.983887  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.483439  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.984054  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.483236  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.983521  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.483231  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
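Interleaved with the addon retries, process 293728 polls for a live apiserver roughly every 500ms. A rough Go sketch of that style of poll loop (the pgrep arguments are verbatim from the log; minikube actually runs this over SSH via ssh_runner, which the sketch does not reproduce):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollAPIServer runs pgrep on an interval until a matching kube-apiserver
// process shows up or the deadline passes. pgrep exits 0 iff it matched,
// so cmd.Run() returning nil means the process exists.
func pollAPIServer(interval, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if cmd.Run() == nil {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	fmt.Println("apiserver up:", pollAPIServer(500*time.Millisecond, 10*time.Second))
}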
	W1206 10:07:06.222598  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:08.222649  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:10.390061  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:10.460795  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:10.460828  293728 retry.go:31] will retry after 24.523063216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
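The retry delays are uneven and grow as the log goes on (11.29s, 20.34s, 24.52s, 24.83s, then 34.22s and 44.35s further below), which looks like exponential backoff with jitter. A hypothetical sketch of that pattern; the base, growth factor, and jitter range here are guesses, not retry.go's actual constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoff doubles a base delay per attempt and adds up to 50% random
// jitter, one common way to get the kind of spread seen above.
// Illustrative constants only.
func backoff(attempt int) time.Duration {
	d := 10 * time.Second * time.Duration(1<<attempt)
	return d + time.Duration(rand.Int63n(int64(d)/2))
}

func main() {
	for attempt := 0; attempt < 4; attempt++ {
		fmt.Printf("attempt %d: would retry after %v\n", attempt, backoff(attempt))
	}
}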
	I1206 10:07:10.483989  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:10.983508  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.483968  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.983921  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.983503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.483736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.983533  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.483788  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.983198  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:10.722891  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:13.222531  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:15.223567  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
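The W-level node_ready lines come from a second minikube process (pid 287962) driving the no-preload-257359 profile, whose apiserver at 192.168.76.2:8443 is equally unreachable. The check behind those lines, sketched with client-go (the kubeconfig path and node name are from the log; the flat 2s retry interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the Node object and reports its Ready condition.
// While the apiserver is down, the Get itself fails with the same
// "connection refused" seen in the warnings above.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := nodeReady(cs, "no-preload-257359")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}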
	I1206 10:07:15.483180  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:15.982114  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:07:15.983591  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:16.054278  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:16.054318  293728 retry.go:31] will retry after 20.338606766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:16.484114  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:16.901533  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:07:16.984157  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:17.001960  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:17.001998  293728 retry.go:31] will retry after 24.827417164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:17.483261  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:17.983420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.983281  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.483741  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.983176  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:17.722636  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:20.222572  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:20.483695  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:20.983984  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.483862  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.483812  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.983632  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.483796  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.984175  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:22.222705  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:24.723752  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:25.483633  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:25.984006  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.483830  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.983203  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.483211  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.983237  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.484156  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.983736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.483880  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.984116  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:27.222614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:29.223485  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:30.483549  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:30.983243  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.483786  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.983608  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:32.483844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:32.483952  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:32.508469  293728 cri.go:89] found id: ""
	I1206 10:07:32.508497  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.508505  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:32.508512  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:32.508574  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:32.533265  293728 cri.go:89] found id: ""
	I1206 10:07:32.533288  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.533297  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:32.533303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:32.533364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:32.562655  293728 cri.go:89] found id: ""
	I1206 10:07:32.562686  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.562695  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:32.562702  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:32.562769  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:32.587755  293728 cri.go:89] found id: ""
	I1206 10:07:32.587781  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.587789  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:32.587796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:32.587855  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:32.613253  293728 cri.go:89] found id: ""
	I1206 10:07:32.613284  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.613292  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:32.613305  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:32.613364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:32.638621  293728 cri.go:89] found id: ""
	I1206 10:07:32.638648  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.638656  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:32.638662  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:32.638775  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:32.663624  293728 cri.go:89] found id: ""
	I1206 10:07:32.663649  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.663657  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:32.663664  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:32.663724  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:32.687850  293728 cri.go:89] found id: ""
	I1206 10:07:32.687872  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.687881  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
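Once the apiserver wait gives up, each diagnostic pass enumerates the expected control-plane containers. Since crictl's --quiet flag prints only container IDs, an empty result (found id: "") means not even an exited container exists for that component; kubelet never created the static pods at all. A small wrapper showing the same check (component names copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs crictl prints for containers whose name
// matches the filter; with --quiet the output is one ID per line, so no
// output means no container, running or exited.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	}
}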
	I1206 10:07:32.687890  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:32.687901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:32.763755  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:32.763831  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:32.788174  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:32.788242  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:32.866103  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:32.857634    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.858159    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.859825    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.860421    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.862051    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:32.857634    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.858159    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.859825    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.860421    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.862051    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
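The memcache.go errors explain why describe nodes dies before doing any work: kubectl first performs API discovery, asking the server for its group list, and with nothing listening on localhost:8443 that request is refused and the command aborts. Roughly the same discovery call via client-go (kubeconfig path from the log; error handling simplified):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// kubectl caches the result of this call (memcache.go); when the
	// apiserver is down it surfaces as the Unhandled Error lines above.
	groups, err := dc.ServerGroups()
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	for _, g := range groups.Groups {
		fmt.Println("group:", g.Name)
	}
}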
	I1206 10:07:32.866126  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:32.866138  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:32.891711  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:32.891745  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
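The "container status" step runs a shell fallback chain: use crictl when it is on PATH, otherwise fall back to docker, with `which crictl || echo crictl` keeping the command well-formed even when which finds nothing. The script string below is verbatim from the log line; wrapping it in Go is only for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl; if it is missing or fails, try docker instead.
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("both container runtimes unavailable:", err)
	}
}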
	I1206 10:07:34.985041  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:35.094954  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:35.094988  293728 retry.go:31] will retry after 34.21540436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:07:31.722556  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:33.722685  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:35.421586  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:35.432096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:35.432164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:35.457419  293728 cri.go:89] found id: ""
	I1206 10:07:35.457442  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.457451  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:35.457457  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:35.457520  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:35.481490  293728 cri.go:89] found id: ""
	I1206 10:07:35.481513  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.481521  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:35.481527  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:35.481586  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:35.506409  293728 cri.go:89] found id: ""
	I1206 10:07:35.506432  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.506441  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:35.506447  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:35.506512  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:35.534896  293728 cri.go:89] found id: ""
	I1206 10:07:35.534923  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.534932  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:35.534939  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:35.534997  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:35.560020  293728 cri.go:89] found id: ""
	I1206 10:07:35.560043  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.560052  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:35.560058  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:35.560115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:35.584963  293728 cri.go:89] found id: ""
	I1206 10:07:35.585028  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.585042  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:35.585049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:35.585110  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:35.617464  293728 cri.go:89] found id: ""
	I1206 10:07:35.617487  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.617495  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:35.617501  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:35.617562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:35.642187  293728 cri.go:89] found id: ""
	I1206 10:07:35.642219  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.642228  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:35.642238  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:35.642250  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:35.655709  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:35.655738  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:35.728266  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:35.714434    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.715121    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.716831    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.717292    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.718947    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:35.714434    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.715121    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.716831    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.717292    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.718947    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:35.728336  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:35.728379  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:35.766222  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:35.766301  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:35.823000  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:35.823024  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:36.393185  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:36.458951  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:36.458990  293728 retry.go:31] will retry after 24.220809087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:38.379270  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:38.389923  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:38.389993  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:38.416450  293728 cri.go:89] found id: ""
	I1206 10:07:38.416517  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.416540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:38.416558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:38.416635  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:38.442635  293728 cri.go:89] found id: ""
	I1206 10:07:38.442663  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.442672  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:38.442680  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:38.442742  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:38.469797  293728 cri.go:89] found id: ""
	I1206 10:07:38.469824  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.469834  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:38.469840  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:38.469899  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:38.497073  293728 cri.go:89] found id: ""
	I1206 10:07:38.497098  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.497107  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:38.497113  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:38.497194  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:38.527432  293728 cri.go:89] found id: ""
	I1206 10:07:38.527465  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.527474  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:38.527481  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:38.527540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:38.554253  293728 cri.go:89] found id: ""
	I1206 10:07:38.554278  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.554290  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:38.554300  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:38.554368  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:38.580022  293728 cri.go:89] found id: ""
	I1206 10:07:38.580070  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.580080  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:38.580087  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:38.580165  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:38.604967  293728 cri.go:89] found id: ""
	I1206 10:07:38.604992  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.605001  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:38.605010  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:38.605041  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:38.672012  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:38.663132    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.663961    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.665865    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.666410    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.668022    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:38.663132    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.663961    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.665865    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.666410    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.668022    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:38.672044  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:38.672075  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:38.697533  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:38.697567  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:38.750151  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:38.750176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:38.835463  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:38.835500  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:07:35.722832  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:38.222743  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:41.350690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:41.361865  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:41.361934  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:41.387755  293728 cri.go:89] found id: ""
	I1206 10:07:41.387781  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.387789  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:41.387796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:41.387854  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:41.412482  293728 cri.go:89] found id: ""
	I1206 10:07:41.412510  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.412519  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:41.412526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:41.412591  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:41.437604  293728 cri.go:89] found id: ""
	I1206 10:07:41.437635  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.437644  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:41.437650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:41.437722  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:41.462503  293728 cri.go:89] found id: ""
	I1206 10:07:41.462573  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.462597  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:41.462616  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:41.462703  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:41.487720  293728 cri.go:89] found id: ""
	I1206 10:07:41.487742  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.487750  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:41.487757  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:41.487819  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:41.513291  293728 cri.go:89] found id: ""
	I1206 10:07:41.513321  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.513332  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:41.513342  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:41.513420  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:41.547109  293728 cri.go:89] found id: ""
	I1206 10:07:41.547132  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.547141  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:41.547147  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:41.547209  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:41.572514  293728 cri.go:89] found id: ""
	I1206 10:07:41.572585  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.572607  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:41.572628  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:41.572669  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:41.629345  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:41.629378  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:41.643897  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:41.643928  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:41.713946  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:41.705234    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.705673    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.707580    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.708362    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.710158    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:41.705234    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.705673    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.707580    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.708362    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.710158    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:41.714006  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:41.714025  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:41.745589  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:41.745645  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:41.830134  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:41.893553  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:41.893593  293728 retry.go:31] will retry after 44.351115962s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:44.324517  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:44.335432  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:44.335507  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:44.365594  293728 cri.go:89] found id: ""
	I1206 10:07:44.365621  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.365630  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:44.365637  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:44.365723  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:44.390876  293728 cri.go:89] found id: ""
	I1206 10:07:44.390909  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.390919  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:44.390944  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:44.391026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:44.421424  293728 cri.go:89] found id: ""
	I1206 10:07:44.421448  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.421462  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:44.421468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:44.421525  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:44.445299  293728 cri.go:89] found id: ""
	I1206 10:07:44.445325  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.445335  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:44.445341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:44.445454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:44.473977  293728 cri.go:89] found id: ""
	I1206 10:07:44.473999  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.474008  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:44.474014  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:44.474072  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:44.501273  293728 cri.go:89] found id: ""
	I1206 10:07:44.501299  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.501308  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:44.501341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:44.501415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:44.525106  293728 cri.go:89] found id: ""
	I1206 10:07:44.525136  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.525154  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:44.525161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:44.525223  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:44.550546  293728 cri.go:89] found id: ""
	I1206 10:07:44.550571  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.550580  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
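
The block above is one pass of minikube's container census: for every control-plane component it runs crictl ps -a --quiet --name=<component> in the containerd root and records the returned IDs, and eight empty results in a row are what let it conclude the control plane never came up. A sketch of the same enumeration, assuming only that sudo and crictl are available on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the loop in the log: ask crictl for all
    // containers (any state) whose name matches the component, returning
    // one ID per output line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers\n", c, len(ids))
        }
    }
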
	I1206 10:07:44.550589  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:44.550600  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:44.615941  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:44.607694    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.608515    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610041    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610630    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.612121    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:44.615962  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:44.615975  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:44.641346  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:44.641377  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:44.669493  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:44.669520  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:44.727196  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:44.727357  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
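
Each failed pass ends with the same evidence-gathering step: unit logs for kubelet and containerd via journalctl, recent kernel warnings via dmesg, and a container listing. A compact sketch that bundles those commands (the dmesg flags are simplified relative to the logged invocation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs runs the diagnostics the log shows minikube collecting
    // after each failed wait and returns their combined output by name.
    func gatherLogs() map[string]string {
        cmds := map[string][]string{
            "kubelet":    {"journalctl", "-u", "kubelet", "-n", "400"},
            "containerd": {"journalctl", "-u", "containerd", "-n", "400"},
            "dmesg":      {"sh", "-c", "dmesg --level warn,err,crit,alert,emerg | tail -n 400"},
            "containers": {"crictl", "ps", "-a"},
        }
        out := make(map[string]string)
        for name, argv := range cmds {
            b, err := exec.Command("sudo", argv...).CombinedOutput()
            if err != nil {
                out[name] = fmt.Sprintf("error: %v", err)
                continue
            }
            out[name] = string(b)
        }
        return out
    }

    func main() {
        for name, text := range gatherLogs() {
            fmt.Printf("=== %s (%d bytes)\n", name, len(text))
        }
    }
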
	W1206 10:07:40.722832  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:43.222679  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:45.222775  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
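
The interleaved 287962 lines come from the parallel no-preload test, which is stuck in its own wait loop: fetch /api/v1/nodes/no-preload-257359, check the Ready condition, and retry when the dial is refused. A client-go sketch of that wait, with the kubeconfig path assumed for illustration:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition, logging and retrying
    // on errors the way the interleaved no-preload lines do.
    func waitNodeReady(name, kubeconfig string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready after %v", name, timeout)
    }

    func main() {
        if err := waitNodeReady("no-preload-257359", "/var/lib/minikube/kubeconfig", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
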
	I1206 10:07:47.260652  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:47.271164  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:47.271238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:47.295481  293728 cri.go:89] found id: ""
	I1206 10:07:47.295506  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.295515  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:47.295521  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:47.295581  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:47.321861  293728 cri.go:89] found id: ""
	I1206 10:07:47.321884  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.321892  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:47.321898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:47.321954  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:47.346071  293728 cri.go:89] found id: ""
	I1206 10:07:47.346094  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.346103  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:47.346110  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:47.346169  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:47.373210  293728 cri.go:89] found id: ""
	I1206 10:07:47.373234  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.373242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:47.373249  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:47.373312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:47.403706  293728 cri.go:89] found id: ""
	I1206 10:07:47.403729  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.403739  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:47.403745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:47.403810  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:47.433807  293728 cri.go:89] found id: ""
	I1206 10:07:47.433831  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.433840  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:47.433847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:47.433904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:47.462210  293728 cri.go:89] found id: ""
	I1206 10:07:47.462233  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.462241  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:47.462247  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:47.462308  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:47.486445  293728 cri.go:89] found id: ""
	I1206 10:07:47.486523  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.486546  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:47.486567  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:47.486597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:47.500083  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:47.500114  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:47.568637  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:47.558715    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.559476    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561148    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561466    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.564516    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:47.568661  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:47.568683  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:47.598178  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:47.598213  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:47.629224  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:47.629249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:50.187574  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:47.727856  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:50.223331  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:50.198529  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:50.198609  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:50.224708  293728 cri.go:89] found id: ""
	I1206 10:07:50.224731  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.224738  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:50.224744  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:50.224806  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:50.253337  293728 cri.go:89] found id: ""
	I1206 10:07:50.253361  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.253370  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:50.253376  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:50.253433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:50.278723  293728 cri.go:89] found id: ""
	I1206 10:07:50.278750  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.278759  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:50.278766  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:50.278830  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:50.308736  293728 cri.go:89] found id: ""
	I1206 10:07:50.308803  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.308822  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:50.308834  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:50.308894  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:50.333136  293728 cri.go:89] found id: ""
	I1206 10:07:50.333162  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.333171  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:50.333177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:50.333263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:50.358071  293728 cri.go:89] found id: ""
	I1206 10:07:50.358105  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.358114  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:50.358137  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:50.358215  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:50.382078  293728 cri.go:89] found id: ""
	I1206 10:07:50.382111  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.382120  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:50.382141  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:50.382222  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:50.407225  293728 cri.go:89] found id: ""
	I1206 10:07:50.407261  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.407270  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:50.407279  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:50.407291  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:50.466553  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:50.466588  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:50.480420  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:50.480450  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:50.546503  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:50.538132    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.538890    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.540463    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.541036    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.542600    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:50.546523  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:50.546546  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:50.573208  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:50.573243  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:53.100604  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:53.111611  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:53.111683  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:53.136465  293728 cri.go:89] found id: ""
	I1206 10:07:53.136494  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.136503  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:53.136510  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:53.136584  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:53.167397  293728 cri.go:89] found id: ""
	I1206 10:07:53.167419  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.167427  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:53.167433  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:53.167501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:53.191735  293728 cri.go:89] found id: ""
	I1206 10:07:53.191769  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.191778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:53.191784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:53.191849  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:53.216472  293728 cri.go:89] found id: ""
	I1206 10:07:53.216495  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.216506  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:53.216513  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:53.216570  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:53.242936  293728 cri.go:89] found id: ""
	I1206 10:07:53.242957  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.242966  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:53.242972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:53.243035  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:53.274015  293728 cri.go:89] found id: ""
	I1206 10:07:53.274041  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.274050  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:53.274056  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:53.274118  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:53.303348  293728 cri.go:89] found id: ""
	I1206 10:07:53.303371  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.303415  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:53.303422  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:53.303486  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:53.332691  293728 cri.go:89] found id: ""
	I1206 10:07:53.332716  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.332724  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:53.332733  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:53.332749  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:53.346274  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:53.346303  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:53.412178  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:53.403243    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.404038    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.405704    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.406009    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.408013    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:53.412203  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:53.412216  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:53.437974  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:53.438008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:53.469789  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:53.469816  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:07:52.723301  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:55.222438  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:56.029614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:56.044312  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:56.044385  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:56.074035  293728 cri.go:89] found id: ""
	I1206 10:07:56.074061  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.074071  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:56.074077  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:56.074137  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:56.101362  293728 cri.go:89] found id: ""
	I1206 10:07:56.101387  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.101397  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:56.101403  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:56.101472  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:56.132837  293728 cri.go:89] found id: ""
	I1206 10:07:56.132867  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.132876  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:56.132882  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:56.132949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:56.162095  293728 cri.go:89] found id: ""
	I1206 10:07:56.162121  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.162129  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:56.162136  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:56.162195  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:56.190088  293728 cri.go:89] found id: ""
	I1206 10:07:56.190113  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.190122  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:56.190128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:56.190188  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:56.217327  293728 cri.go:89] found id: ""
	I1206 10:07:56.217355  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.217365  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:56.217372  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:56.217432  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:56.242210  293728 cri.go:89] found id: ""
	I1206 10:07:56.242246  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.242255  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:56.242261  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:56.242330  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:56.266843  293728 cri.go:89] found id: ""
	I1206 10:07:56.266871  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.266879  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:56.266888  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:56.266900  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:56.324906  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:56.324941  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:56.339074  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:56.339111  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:56.407395  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:56.398763    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.399992    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.400889    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.401941    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.403601    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:56.407417  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:56.407434  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:56.433408  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:56.433442  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:58.962420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:58.984606  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:58.984688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:59.037604  293728 cri.go:89] found id: ""
	I1206 10:07:59.037795  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.038054  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:59.038096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:59.038236  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:59.074512  293728 cri.go:89] found id: ""
	I1206 10:07:59.074555  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.074564  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:59.074571  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:59.074638  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:59.101868  293728 cri.go:89] found id: ""
	I1206 10:07:59.101895  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.101904  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:59.101910  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:59.101973  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:59.127188  293728 cri.go:89] found id: ""
	I1206 10:07:59.127214  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.127223  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:59.127230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:59.127286  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:59.152234  293728 cri.go:89] found id: ""
	I1206 10:07:59.152259  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.152268  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:59.152274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:59.152342  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:59.177629  293728 cri.go:89] found id: ""
	I1206 10:07:59.177654  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.177663  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:59.177670  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:59.177728  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:59.202156  293728 cri.go:89] found id: ""
	I1206 10:07:59.202185  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.202195  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:59.202201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:59.202261  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:59.227130  293728 cri.go:89] found id: ""
	I1206 10:07:59.227165  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.227174  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:59.227183  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:59.227204  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:59.241522  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:59.241597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:59.311704  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:59.302465    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.302959    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.304730    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.305205    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.306765    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:07:59.311730  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:59.311742  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:59.337213  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:59.337246  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:59.365911  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:59.365940  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:07:57.222678  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:59.223226  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:00.680788  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:08:00.745958  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:00.746077  293728 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
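
The storage-provisioner apply dies the same way the dashboard manifests did: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so an unreachable apiserver turns a validation step into a connection error. The stderr itself names the escape hatch, --validate=false, though skipping validation only helps once the apiserver is up to accept the objects. A sketch of the apply with that switch, mirroring the logged command line:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifest mirrors the addon apply from the log; validate=false is
    // the workaround the stderr suggests, useful only once the apiserver is
    // reachable again.
    func applyManifest(kubectl, kubeconfig, manifest string, validate bool) error {
        args := []string{"apply", "--force", "-f", manifest}
        if !validate {
            args = append(args, "--validate=false")
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
        }
        return nil
    }

    func main() {
        err := applyManifest(
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            false, // skip client-side validation, as the stderr suggests
        )
        if err != nil {
            fmt.Println(err)
        }
    }
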
	I1206 10:08:01.925540  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:01.936468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:01.936592  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:01.965164  293728 cri.go:89] found id: ""
	I1206 10:08:01.965242  293728 logs.go:282] 0 containers: []
	W1206 10:08:01.965277  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:01.965302  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:01.965393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:02.013736  293728 cri.go:89] found id: ""
	I1206 10:08:02.013774  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.013783  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:02.013790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:02.013862  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:02.058535  293728 cri.go:89] found id: ""
	I1206 10:08:02.058627  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.058651  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:02.058685  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:02.058798  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:02.091149  293728 cri.go:89] found id: ""
	I1206 10:08:02.091213  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.091242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:02.091286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:02.091460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:02.116844  293728 cri.go:89] found id: ""
	I1206 10:08:02.116870  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.116878  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:02.116884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:02.116945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:02.143338  293728 cri.go:89] found id: ""
	I1206 10:08:02.143439  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.143463  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:02.143485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:02.143573  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:02.169310  293728 cri.go:89] found id: ""
	I1206 10:08:02.169333  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.169342  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:02.169348  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:02.169410  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:02.200025  293728 cri.go:89] found id: ""
	I1206 10:08:02.200096  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.200104  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:02.200113  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:02.200125  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:02.257304  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:02.257340  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:02.271507  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:02.271541  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:02.341058  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:02.331854    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.332684    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334338    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334769    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.336486    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:02.331854    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.332684    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334338    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334769    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.336486    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:02.341084  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:02.341097  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:02.367636  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:02.367672  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
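The block above is one full iteration of minikube's apiserver wait loop: it probes for a kube-apiserver process with pgrep, queries the CRI for each expected component, and, with every query returning an empty ID list, falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status diagnostics. The per-component check can be reproduced by hand with the same crictl invocation the loop runs; a minimal sketch over the eight names seen in the log:

    # empty output means containerd has no record of the container
    # in any state (-a includes exited containers)
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done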
	I1206 10:08:04.899503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:04.910154  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:04.910231  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:04.934598  293728 cri.go:89] found id: ""
	I1206 10:08:04.934623  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.934632  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:04.934638  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:04.934699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:04.959971  293728 cri.go:89] found id: ""
	I1206 10:08:04.959995  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.960004  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:04.960010  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:04.960071  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:05.027645  293728 cri.go:89] found id: ""
	I1206 10:08:05.027668  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.027677  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:05.027683  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:05.027758  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:05.077828  293728 cri.go:89] found id: ""
	I1206 10:08:05.077868  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.077878  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:05.077884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:05.077946  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:05.103986  293728 cri.go:89] found id: ""
	I1206 10:08:05.104014  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.104023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:05.104029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:05.104091  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:05.129703  293728 cri.go:89] found id: ""
	I1206 10:08:05.129778  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.129822  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:05.129843  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:05.129930  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:05.156958  293728 cri.go:89] found id: ""
	I1206 10:08:05.156982  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.156990  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:05.156996  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:05.157058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:05.182537  293728 cri.go:89] found id: ""
	I1206 10:08:05.182565  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.182575  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:05.182585  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:05.182598  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:08:01.722650  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:04.222533  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:05.196389  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:05.196419  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:05.262239  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:05.253199    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.253990    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.255826    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.256391    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.257908    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:05.253199    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.253990    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.255826    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.256391    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.257908    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:05.262265  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:05.262278  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:05.288138  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:05.288178  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:05.316468  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:05.316497  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
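A detail worth noting in these cycles: the cri.go lines query root /run/containerd/runc/k8s.io, i.e. the containerd runtime, and an empty result from crictl ps -a --quiet means no container with that name exists in any state, running or exited. In other words the control plane was never created, not crashed. The raw table form of the same query, and the runtime journal the collector reads, are:

    sudo crictl ps -a                      # expected here: header row only, given the empty --quiet results above
    sudo journalctl -u containerd -n 400   # the unit the 'containerd' gather step reads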
	I1206 10:08:07.872986  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:07.886594  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:07.886666  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:07.912554  293728 cri.go:89] found id: ""
	I1206 10:08:07.912580  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.912589  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:07.912595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:07.912668  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:07.938006  293728 cri.go:89] found id: ""
	I1206 10:08:07.938033  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.938042  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:07.938049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:07.938107  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:07.967969  293728 cri.go:89] found id: ""
	I1206 10:08:07.967995  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.968004  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:07.968011  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:07.968079  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:08.001472  293728 cri.go:89] found id: ""
	I1206 10:08:08.001495  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.001504  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:08.001511  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:08.001577  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:08.064509  293728 cri.go:89] found id: ""
	I1206 10:08:08.064538  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.064547  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:08.064554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:08.064612  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:08.094308  293728 cri.go:89] found id: ""
	I1206 10:08:08.094376  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.094402  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:08.094434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:08.094522  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:08.124650  293728 cri.go:89] found id: ""
	I1206 10:08:08.124695  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.124705  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:08.124712  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:08.124782  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:08.150816  293728 cri.go:89] found id: ""
	I1206 10:08:08.150851  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.150860  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:08.150868  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:08.150879  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:08.207170  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:08.207203  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:08.220834  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:08.220860  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:08.285113  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:08.285138  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:08.285153  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:08.311342  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:08.311548  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:09.310714  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:08:09.371609  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:09.371709  293728 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
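The 'default-storageclass' addon fails for exactly the same reason as 'storage-provisioner' earlier: validation needs /openapi/v2 from an apiserver that refuses connections. A quick confirmation that no apiserver process exists, using the same probe minikube runs each cycle (quoting added around the pattern; pgrep exits non-zero when nothing matches):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "exit=$?"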
	W1206 10:08:06.222644  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:08.722561  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
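The W1206 lines tagged with PID 287962 are interleaved from a parallel test process: they poll node "no-preload-257359" at 192.168.76.2:8443, a separate minikube profile, while the surrounding PID 293728 stream targets localhost:8443. Both endpoints refuse connections. To read either stream in isolation, the combined report can be filtered by PID; a sketch assuming the report was saved as report.txt (a hypothetical filename):

    grep ' 293728 ' report.txt    # the wait loop shown above
    grep ' 287962 ' report.txt    # the no-preload-257359 readiness retries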
	I1206 10:08:10.840228  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:10.850847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:10.850914  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:10.881439  293728 cri.go:89] found id: ""
	I1206 10:08:10.881517  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.881540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:10.881555  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:10.881629  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:10.910942  293728 cri.go:89] found id: ""
	I1206 10:08:10.910971  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.910980  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:10.910987  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:10.911049  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:10.936471  293728 cri.go:89] found id: ""
	I1206 10:08:10.936495  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.936503  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:10.936509  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:10.936566  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:10.964540  293728 cri.go:89] found id: ""
	I1206 10:08:10.964567  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.964575  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:10.964581  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:10.964650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:11.035295  293728 cri.go:89] found id: ""
	I1206 10:08:11.035322  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.035332  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:11.035354  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:11.035433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:11.081240  293728 cri.go:89] found id: ""
	I1206 10:08:11.081266  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.081275  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:11.081282  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:11.081347  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:11.109502  293728 cri.go:89] found id: ""
	I1206 10:08:11.109543  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.109554  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:11.109561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:11.109625  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:11.138072  293728 cri.go:89] found id: ""
	I1206 10:08:11.138100  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.138113  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:11.138122  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:11.138134  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:11.207996  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:11.208060  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:11.208081  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:11.234490  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:11.234525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:11.263495  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:11.263525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:11.323991  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:11.324034  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
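Every describe-nodes attempt fails identically: kubectl's discovery layer retries the /api group-list request five times (the memcache.go:265 lines) before printing the final "connection refused" summary. The collector's own command can be rerun verbatim to reproduce it:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig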
	I1206 10:08:13.838014  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:13.849112  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:13.849181  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:13.873403  293728 cri.go:89] found id: ""
	I1206 10:08:13.873472  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.873498  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:13.873515  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:13.873602  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:13.900596  293728 cri.go:89] found id: ""
	I1206 10:08:13.900616  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.900625  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:13.900631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:13.900694  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:13.925385  293728 cri.go:89] found id: ""
	I1206 10:08:13.925409  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.925417  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:13.925424  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:13.925481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:13.950796  293728 cri.go:89] found id: ""
	I1206 10:08:13.950823  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.950837  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:13.950844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:13.950902  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:14.028934  293728 cri.go:89] found id: ""
	I1206 10:08:14.028964  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.028973  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:14.028979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:14.029058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:14.063925  293728 cri.go:89] found id: ""
	I1206 10:08:14.063948  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.063957  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:14.063963  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:14.064024  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:14.091439  293728 cri.go:89] found id: ""
	I1206 10:08:14.091465  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.091473  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:14.091480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:14.091556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:14.116453  293728 cri.go:89] found id: ""
	I1206 10:08:14.116476  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.116485  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:14.116494  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:14.116506  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:14.173576  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:14.173615  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:14.187707  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:14.187736  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:14.256417  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:14.256440  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:14.256452  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:14.281458  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:14.281490  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
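The other gather steps also map one-to-one onto shell commands already visible in the log, so the diagnostics minikube collects can be reproduced directly on the node ($() is used below in place of the log's backticks; the behavior is the same):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a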
	W1206 10:08:10.722908  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:13.223465  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:16.809300  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:16.820406  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:16.820481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:16.845040  293728 cri.go:89] found id: ""
	I1206 10:08:16.845105  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.845130  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:16.845144  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:16.845217  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:16.875450  293728 cri.go:89] found id: ""
	I1206 10:08:16.875475  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.875484  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:16.875500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:16.875562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:16.902002  293728 cri.go:89] found id: ""
	I1206 10:08:16.902048  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.902059  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:16.902068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:16.902146  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:16.927319  293728 cri.go:89] found id: ""
	I1206 10:08:16.927353  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.927361  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:16.927368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:16.927466  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:16.952239  293728 cri.go:89] found id: ""
	I1206 10:08:16.952265  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.952273  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:16.952280  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:16.952386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:16.994322  293728 cri.go:89] found id: ""
	I1206 10:08:16.994351  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.994360  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:16.994368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:16.994437  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:17.032079  293728 cri.go:89] found id: ""
	I1206 10:08:17.032113  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.032122  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:17.032128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:17.032201  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:17.079256  293728 cri.go:89] found id: ""
	I1206 10:08:17.079321  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.079343  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:17.079364  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:17.079406  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:17.104677  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:17.104707  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:17.136676  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:17.136701  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:17.195915  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:17.195950  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:17.209626  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:17.209653  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:17.278745  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
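The pgrep timestamps (10:08:01, :04, :07, :10, :13, :16, :19, :22) show the wait loop firing roughly every three seconds with identical results each time, which is consistent with a control plane that never came up rather than one that is flapping. Counting the captured iterations is a one-liner, again assuming a saved report.txt (hypothetical filename):

    grep -c 'pgrep -xnf kube-apiserver' report.txt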
	I1206 10:08:19.780767  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:19.791658  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:19.791756  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:19.820516  293728 cri.go:89] found id: ""
	I1206 10:08:19.820539  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.820547  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:19.820554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:19.820652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:19.845473  293728 cri.go:89] found id: ""
	I1206 10:08:19.845499  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.845507  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:19.845514  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:19.845572  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:19.871555  293728 cri.go:89] found id: ""
	I1206 10:08:19.871580  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.871592  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:19.871598  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:19.871658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:19.902754  293728 cri.go:89] found id: ""
	I1206 10:08:19.902778  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.902787  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:19.902793  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:19.902853  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:19.927447  293728 cri.go:89] found id: ""
	I1206 10:08:19.927473  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.927482  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:19.927489  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:19.927549  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:19.951607  293728 cri.go:89] found id: ""
	I1206 10:08:19.951634  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.951644  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:19.951651  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:19.951718  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:20.023839  293728 cri.go:89] found id: ""
	I1206 10:08:20.023868  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.023879  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:20.023886  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:20.023951  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:20.064702  293728 cri.go:89] found id: ""
	I1206 10:08:20.064730  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.064739  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:20.064748  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:20.064761  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:20.131531  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:20.131555  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:20.131566  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:20.157955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:20.157991  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:20.188100  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:20.188126  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:15.723287  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:18.223318  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:20.248399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:20.248437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
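Since the CRI has no record of any control-plane container, the kubelet journal gathered each cycle is the most likely place to show why the static pods were never created. A hypothetical filter over the same journal the collector reads ('fail|error' is an illustrative pattern, not one minikube uses):

    sudo journalctl -u kubelet -n 400 | grep -iE 'fail|error' | tail -n 20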
	I1206 10:08:22.762476  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:22.774338  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:22.774408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:22.803197  293728 cri.go:89] found id: ""
	I1206 10:08:22.803220  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.803228  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:22.803234  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:22.803292  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:22.828985  293728 cri.go:89] found id: ""
	I1206 10:08:22.829009  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.829018  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:22.829024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:22.829084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:22.857670  293728 cri.go:89] found id: ""
	I1206 10:08:22.857695  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.857704  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:22.857710  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:22.857770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:22.886863  293728 cri.go:89] found id: ""
	I1206 10:08:22.886889  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.886898  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:22.886905  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:22.886967  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:22.912046  293728 cri.go:89] found id: ""
	I1206 10:08:22.912072  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.912080  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:22.912086  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:22.912149  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:22.940438  293728 cri.go:89] found id: ""
	I1206 10:08:22.940516  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.940530  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:22.940538  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:22.940597  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:22.965932  293728 cri.go:89] found id: ""
	I1206 10:08:22.965957  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.965966  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:22.965973  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:22.966034  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:23.036167  293728 cri.go:89] found id: ""
	I1206 10:08:23.036194  293728 logs.go:282] 0 containers: []
	W1206 10:08:23.036203  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:23.036212  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:23.036224  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:23.054454  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:23.054481  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:23.120660  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:23.120680  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:23.120692  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:23.146879  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:23.146913  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:23.177356  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:23.177389  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:20.722592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:23.222550  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
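	The block that began at 10:08:22 repeats below roughly every three seconds with only timestamps and helper PIDs changing: minikube pgreps for kube-apiserver, asks crictl for each control-plane container by name, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output. The same probe can be reproduced by hand on the node; this is an illustrative sketch built only from the commands visible in the log, not minikube's own code:

	    # poll until crictl reports at least one kube-apiserver container
	    until sudo crictl ps -a --quiet --name=kube-apiserver | grep -q .; do
	      echo "kube-apiserver container not found; retrying in 3s"
	      sleep 3
	    done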
	I1206 10:08:25.739842  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:25.751155  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:25.751238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:25.781790  293728 cri.go:89] found id: ""
	I1206 10:08:25.781813  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.781821  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:25.781828  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:25.781884  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:25.809915  293728 cri.go:89] found id: ""
	I1206 10:08:25.809940  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.809948  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:25.809954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:25.810014  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:25.840293  293728 cri.go:89] found id: ""
	I1206 10:08:25.840318  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.840327  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:25.840334  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:25.840390  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:25.869368  293728 cri.go:89] found id: ""
	I1206 10:08:25.869401  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.869410  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:25.869416  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:25.869488  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:25.898302  293728 cri.go:89] found id: ""
	I1206 10:08:25.898335  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.898344  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:25.898351  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:25.898417  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:25.925837  293728 cri.go:89] found id: ""
	I1206 10:08:25.925864  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.925873  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:25.925880  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:25.925940  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:25.950501  293728 cri.go:89] found id: ""
	I1206 10:08:25.950537  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.950546  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:25.950552  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:25.950618  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:26.003264  293728 cri.go:89] found id: ""
	I1206 10:08:26.003294  293728 logs.go:282] 0 containers: []
	W1206 10:08:26.003305  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:26.003316  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:26.003327  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:26.046472  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:26.046503  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:26.091770  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:26.091798  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:26.148719  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:26.148755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:26.165689  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:26.165733  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:26.231230  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:26.245490  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:08:26.310812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:26.310914  293728 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:08:26.314238  293728 out.go:179] * Enabled addons: 
	I1206 10:08:26.317143  293728 addons.go:530] duration metric: took 1m53.881766525s for enable addons: enabled=[]
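	Every dashboard manifest in the failed apply above errors out for the same reason: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443. The stderr itself names a workaround (--validate=false), though that only skips schema validation; the apply still needs a reachable apiserver to succeed. A minimal manual retry from inside the node, assuming shell access via minikube ssh (the profile name is a placeholder, not taken from this log):

	    minikube ssh -p <profile>
	    # confirm an apiserver container actually exists before retrying the apply
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/dashboard-ns.yaml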
	I1206 10:08:28.731518  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:28.742380  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:28.742460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:28.768392  293728 cri.go:89] found id: ""
	I1206 10:08:28.768416  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.768425  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:28.768431  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:28.768489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:28.795017  293728 cri.go:89] found id: ""
	I1206 10:08:28.795043  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.795052  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:28.795059  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:28.795130  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:28.831707  293728 cri.go:89] found id: ""
	I1206 10:08:28.831734  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.831742  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:28.831748  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:28.831807  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:28.857267  293728 cri.go:89] found id: ""
	I1206 10:08:28.857293  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.857304  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:28.857317  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:28.857415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:28.887732  293728 cri.go:89] found id: ""
	I1206 10:08:28.887754  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.887762  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:28.887769  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:28.887827  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:28.912905  293728 cri.go:89] found id: ""
	I1206 10:08:28.912970  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.912984  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:28.912992  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:28.913051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:28.937740  293728 cri.go:89] found id: ""
	I1206 10:08:28.937764  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.937774  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:28.937781  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:28.937840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:28.964042  293728 cri.go:89] found id: ""
	I1206 10:08:28.964111  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.964126  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:28.964135  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:28.964147  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:29.034399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:29.034439  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:29.059150  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:29.059176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:29.134200  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:29.134222  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:29.134235  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:29.160868  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:29.160901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:25.722683  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:27.723593  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:30.222645  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:31.689201  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:31.700497  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:31.700569  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:31.726402  293728 cri.go:89] found id: ""
	I1206 10:08:31.726426  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.726434  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:31.726441  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:31.726503  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:31.752620  293728 cri.go:89] found id: ""
	I1206 10:08:31.752644  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.752652  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:31.752659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:31.752720  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:31.778722  293728 cri.go:89] found id: ""
	I1206 10:08:31.778749  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.778758  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:31.778764  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:31.778825  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:31.804730  293728 cri.go:89] found id: ""
	I1206 10:08:31.804754  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.804762  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:31.804768  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:31.804828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:31.834276  293728 cri.go:89] found id: ""
	I1206 10:08:31.834303  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.834312  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:31.834322  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:31.834388  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:31.859721  293728 cri.go:89] found id: ""
	I1206 10:08:31.859744  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.859752  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:31.859759  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:31.859889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:31.888679  293728 cri.go:89] found id: ""
	I1206 10:08:31.888746  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.888760  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:31.888767  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:31.888828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:31.915769  293728 cri.go:89] found id: ""
	I1206 10:08:31.915794  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.915804  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:31.915812  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:31.915825  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:31.929129  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:31.929155  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:32.017380  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:32.017406  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:32.017420  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:32.046135  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:32.046218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:32.081462  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:32.081485  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:34.642406  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:34.653187  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:34.653263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:34.683091  293728 cri.go:89] found id: ""
	I1206 10:08:34.683116  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.683124  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:34.683130  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:34.683189  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:34.709426  293728 cri.go:89] found id: ""
	I1206 10:08:34.709453  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.709462  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:34.709468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:34.709528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:34.740189  293728 cri.go:89] found id: ""
	I1206 10:08:34.740215  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.740223  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:34.740230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:34.740289  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:34.769902  293728 cri.go:89] found id: ""
	I1206 10:08:34.769932  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.769942  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:34.769954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:34.770026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:34.797331  293728 cri.go:89] found id: ""
	I1206 10:08:34.797358  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.797367  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:34.797374  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:34.797434  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:34.823286  293728 cri.go:89] found id: ""
	I1206 10:08:34.823309  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.823318  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:34.823324  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:34.823406  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:34.849130  293728 cri.go:89] found id: ""
	I1206 10:08:34.849153  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.849162  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:34.849168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:34.849229  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:34.873883  293728 cri.go:89] found id: ""
	I1206 10:08:34.873905  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.873913  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:34.873922  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:34.873933  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:34.929942  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:34.929976  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:34.944124  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:34.944205  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:35.057155  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:35.057180  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:35.057193  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:35.090699  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:35.090741  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:32.223260  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:34.723506  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:37.620713  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:37.631409  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:37.631478  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:37.668926  293728 cri.go:89] found id: ""
	I1206 10:08:37.668949  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.668958  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:37.668966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:37.669025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:37.698809  293728 cri.go:89] found id: ""
	I1206 10:08:37.698831  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.698840  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:37.698846  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:37.698905  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:37.726123  293728 cri.go:89] found id: ""
	I1206 10:08:37.726146  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.726155  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:37.726161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:37.726219  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:37.750745  293728 cri.go:89] found id: ""
	I1206 10:08:37.750818  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.750842  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:37.750861  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:37.750945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:37.777744  293728 cri.go:89] found id: ""
	I1206 10:08:37.777814  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.777837  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:37.777857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:37.777945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:37.804124  293728 cri.go:89] found id: ""
	I1206 10:08:37.804151  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.804160  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:37.804166  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:37.804243  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:37.828930  293728 cri.go:89] found id: ""
	I1206 10:08:37.828995  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.829010  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:37.829017  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:37.829076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:37.853436  293728 cri.go:89] found id: ""
	I1206 10:08:37.853459  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.853468  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:37.853476  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:37.853493  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:37.910673  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:37.910709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:37.926464  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:37.926504  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:38.046192  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:38.019476    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.031978    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.032900    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037073    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037736    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:38.019476    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.031978    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.032900    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037073    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037736    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:38.046217  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:38.046230  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:38.078770  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:38.078805  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:37.222544  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:39.222587  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:40.613605  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:40.624180  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:40.624256  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:40.648680  293728 cri.go:89] found id: ""
	I1206 10:08:40.648706  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.648715  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:40.648721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:40.648783  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:40.674691  293728 cri.go:89] found id: ""
	I1206 10:08:40.674716  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.674725  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:40.674732  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:40.674802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:40.700970  293728 cri.go:89] found id: ""
	I1206 10:08:40.700997  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.701006  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:40.701013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:40.701076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:40.729911  293728 cri.go:89] found id: ""
	I1206 10:08:40.729940  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.729949  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:40.729956  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:40.730020  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:40.755581  293728 cri.go:89] found id: ""
	I1206 10:08:40.755611  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.755620  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:40.755626  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:40.755686  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:40.781938  293728 cri.go:89] found id: ""
	I1206 10:08:40.782007  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.782030  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:40.782051  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:40.782139  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:40.811855  293728 cri.go:89] found id: ""
	I1206 10:08:40.811880  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.811889  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:40.811895  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:40.811961  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:40.841527  293728 cri.go:89] found id: ""
	I1206 10:08:40.841553  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.841562  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:40.841571  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:40.841583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:40.854956  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:40.854983  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:40.924783  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:40.916653    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.917278    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.918774    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.919183    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.920651    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:40.916653    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.917278    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.918774    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.919183    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.920651    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:40.924807  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:40.924823  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:40.950611  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:40.950646  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:41.021978  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:41.022008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
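
The loop above is minikube's log-gathering fallback: with the API server unreachable, it shells into the node and asks the CRI directly for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and every query comes back empty. The same check can be run by hand; a minimal sketch, assuming crictl is on the node's PATH (the profile name below is a placeholder, not taken from this log):

    # query the CRI directly when kubectl cannot reach the API server
    minikube ssh -p <profile> -- "sudo crictl ps -a --quiet --name=kube-apiserver"
    # empty output corresponds to the 'found id: ""' lines above
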
	I1206 10:08:43.596447  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:43.607463  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:43.607540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:43.632638  293728 cri.go:89] found id: ""
	I1206 10:08:43.632660  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.632668  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:43.632675  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:43.632737  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:43.657538  293728 cri.go:89] found id: ""
	I1206 10:08:43.657616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.657632  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:43.657639  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:43.657711  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:43.683595  293728 cri.go:89] found id: ""
	I1206 10:08:43.683621  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.683630  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:43.683636  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:43.683706  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:43.709348  293728 cri.go:89] found id: ""
	I1206 10:08:43.709371  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.709380  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:43.709387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:43.709451  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:43.734592  293728 cri.go:89] found id: ""
	I1206 10:08:43.734616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.734625  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:43.734631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:43.734689  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:43.761297  293728 cri.go:89] found id: ""
	I1206 10:08:43.761362  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.761387  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:43.761405  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:43.761493  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:43.789795  293728 cri.go:89] found id: ""
	I1206 10:08:43.789831  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.789840  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:43.789847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:43.789919  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:43.817708  293728 cri.go:89] found id: ""
	I1206 10:08:43.817735  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.817744  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:43.817762  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:43.817774  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:43.831448  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:43.831483  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:43.897033  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:43.888843    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.889730    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891528    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891839    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.893322    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:43.888843    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.889730    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891528    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891839    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.893322    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:43.897107  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:43.897131  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:43.922955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:43.922990  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:43.960423  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:43.960457  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:41.722543  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:43.723229  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
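
The W-level lines from PID 287962 belong to a second, parallel test (the no-preload-257359 profile) polling the node's Ready condition; its API endpoint 192.168.76.2:8443 refuses connections just as localhost:8443 does for the log gatherer. For reference, an equivalent manual probe (a sketch; assumes the profile's kubeconfig context is active):

    kubectl get node no-preload-257359 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # while the API server is down this fails with the same
    # "dial tcp 192.168.76.2:8443: connect: connection refused"
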
	I1206 10:08:46.534389  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:46.545120  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:46.545205  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:46.570287  293728 cri.go:89] found id: ""
	I1206 10:08:46.570313  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.570322  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:46.570328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:46.570391  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:46.600524  293728 cri.go:89] found id: ""
	I1206 10:08:46.600609  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.600631  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:46.600650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:46.600734  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:46.627292  293728 cri.go:89] found id: ""
	I1206 10:08:46.627314  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.627322  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:46.627328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:46.627424  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:46.652620  293728 cri.go:89] found id: ""
	I1206 10:08:46.652642  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.652651  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:46.652657  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:46.652716  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:46.681992  293728 cri.go:89] found id: ""
	I1206 10:08:46.682015  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.682023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:46.682029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:46.682087  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:46.708290  293728 cri.go:89] found id: ""
	I1206 10:08:46.708363  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.708408  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:46.708434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:46.708528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:46.737816  293728 cri.go:89] found id: ""
	I1206 10:08:46.737890  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.737915  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:46.737935  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:46.738021  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:46.768334  293728 cri.go:89] found id: ""
	I1206 10:08:46.768407  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.768430  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:46.768451  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:46.768491  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:46.782268  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:46.782344  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:46.850687  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:46.840824    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.841622    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.843626    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.844354    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.846055    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:46.840824    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.841622    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.843626    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.844354    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.846055    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:46.850714  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:46.850727  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:46.877310  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:46.877362  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:46.909345  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:46.909376  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
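
The dmesg pass in each round keeps only warning-and-worse kernel messages and trims to the last 400 lines, which is why it returns almost instantly here. The flags are util-linux dmesg options: -P disables the pager, -H enables human-readable timestamps, -L=never turns off color. Reproduced verbatim from the command in the log:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
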
	I1206 10:08:49.467346  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:49.477899  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:49.477971  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:49.502546  293728 cri.go:89] found id: ""
	I1206 10:08:49.502569  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.502578  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:49.502584  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:49.502646  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:49.527592  293728 cri.go:89] found id: ""
	I1206 10:08:49.527663  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.527686  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:49.527699  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:49.527760  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:49.553748  293728 cri.go:89] found id: ""
	I1206 10:08:49.553770  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.553778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:49.553784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:49.553841  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:49.580182  293728 cri.go:89] found id: ""
	I1206 10:08:49.580205  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.580214  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:49.580220  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:49.580285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:49.609009  293728 cri.go:89] found id: ""
	I1206 10:08:49.609034  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.609043  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:49.609050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:49.609114  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:49.634196  293728 cri.go:89] found id: ""
	I1206 10:08:49.634218  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.634227  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:49.634233  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:49.634293  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:49.660015  293728 cri.go:89] found id: ""
	I1206 10:08:49.660038  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.660047  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:49.660053  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:49.660115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:49.685329  293728 cri.go:89] found id: ""
	I1206 10:08:49.685355  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.685364  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:49.685373  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:49.685385  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:49.699189  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:49.699218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:49.768229  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:49.760011    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.760509    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762154    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762619    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.764026    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:49.760011    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.760509    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762154    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762619    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.764026    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:49.768253  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:49.768267  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:49.794221  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:49.794255  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:49.825320  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:49.825349  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:46.222859  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:48.223148  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:50.223492  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
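
Every "describe nodes" attempt dies before reaching the API because nothing is listening on port 8443 inside the node. A quick way to confirm that from the node itself; a sketch, assuming ss and curl are available in the node image (/healthz is a standard kube-apiserver endpoint):

    # no listener on 8443 explains "connect: connection refused"
    sudo ss -ltn 'sport = :8443'
    # if the apiserver were up, the health endpoint would answer
    curl -k https://localhost:8443/healthz
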
	I1206 10:08:52.381962  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:52.392897  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:52.392974  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:52.421172  293728 cri.go:89] found id: ""
	I1206 10:08:52.421197  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.421206  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:52.421212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:52.421276  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:52.449281  293728 cri.go:89] found id: ""
	I1206 10:08:52.449305  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.449313  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:52.449320  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:52.449378  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:52.474517  293728 cri.go:89] found id: ""
	I1206 10:08:52.474539  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.474547  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:52.474553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:52.474616  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:52.500435  293728 cri.go:89] found id: ""
	I1206 10:08:52.500458  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.500466  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:52.500473  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:52.500532  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:52.526935  293728 cri.go:89] found id: ""
	I1206 10:08:52.526957  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.526965  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:52.526972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:52.527031  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:52.553625  293728 cri.go:89] found id: ""
	I1206 10:08:52.553646  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.553654  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:52.553663  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:52.553721  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:52.580092  293728 cri.go:89] found id: ""
	I1206 10:08:52.580169  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.580194  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:52.580206  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:52.580269  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:52.609595  293728 cri.go:89] found id: ""
	I1206 10:08:52.609622  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.609631  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:52.609640  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:52.609658  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:52.666423  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:52.666460  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:52.680542  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:52.680572  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:52.745123  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:52.737007    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.737635    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739181    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739662    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.741168    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:52.737007    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.737635    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739181    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739662    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.741168    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:52.745142  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:52.745154  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:52.771578  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:52.771612  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:52.722479  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:54.722588  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:57.222560  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:58.722277  287962 node_ready.go:38] duration metric: took 6m0.000230261s for node "no-preload-257359" to be "Ready" ...
	I1206 10:08:58.725649  287962 out.go:203] 
	W1206 10:08:58.728547  287962 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:08:58.728572  287962 out.go:285] * 
	W1206 10:08:58.730704  287962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:08:58.733695  287962 out.go:203] 
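
Here the no-preload test gives up: the node never reported Ready within its 6m0s budget, so minikube exits with GUEST_START and prints the issue-reporting box. The box already names the follow-up command; both of these are standard minikube CLI calls:

    # collect the full log bundle the issue template asks for
    minikube logs --file=logs.txt -p no-preload-257359
    # quick component-state summary for the profile
    minikube status -p no-preload-257359
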
	I1206 10:08:55.300596  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:55.311733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:55.311837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:55.337436  293728 cri.go:89] found id: ""
	I1206 10:08:55.337466  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.337475  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:55.337482  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:55.337557  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:55.362426  293728 cri.go:89] found id: ""
	I1206 10:08:55.362449  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.362457  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:55.362462  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:55.362539  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:55.388462  293728 cri.go:89] found id: ""
	I1206 10:08:55.388488  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.388497  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:55.388503  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:55.388567  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:55.417368  293728 cri.go:89] found id: ""
	I1206 10:08:55.417391  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.417400  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:55.417406  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:55.417465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:55.444014  293728 cri.go:89] found id: ""
	I1206 10:08:55.444052  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.444061  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:55.444067  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:55.444126  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:55.473384  293728 cri.go:89] found id: ""
	I1206 10:08:55.473408  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.473417  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:55.473423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:55.473485  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:55.499095  293728 cri.go:89] found id: ""
	I1206 10:08:55.499119  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.499128  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:55.499134  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:55.499193  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:55.530488  293728 cri.go:89] found id: ""
	I1206 10:08:55.530560  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.530585  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:55.530607  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:55.530642  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:55.543996  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:55.544023  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:55.609232  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:55.600433    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.601179    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.602847    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.603477    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.605074    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:55.600433    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.601179    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.602847    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.603477    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.605074    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:55.609295  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:55.609315  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:55.635259  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:55.635292  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:55.663234  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:55.663263  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:58.219942  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:58.240184  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:58.240251  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:58.288171  293728 cri.go:89] found id: ""
	I1206 10:08:58.288193  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.288201  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:58.288208  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:58.288267  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:58.326999  293728 cri.go:89] found id: ""
	I1206 10:08:58.327020  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.327029  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:58.327035  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:58.327104  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:58.354289  293728 cri.go:89] found id: ""
	I1206 10:08:58.354316  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.354325  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:58.354331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:58.354392  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:58.378166  293728 cri.go:89] found id: ""
	I1206 10:08:58.378195  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.378204  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:58.378210  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:58.378270  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:58.405700  293728 cri.go:89] found id: ""
	I1206 10:08:58.405721  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.405734  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:58.405740  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:58.405800  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:58.430772  293728 cri.go:89] found id: ""
	I1206 10:08:58.430800  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.430809  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:58.430816  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:58.430882  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:58.455749  293728 cri.go:89] found id: ""
	I1206 10:08:58.455777  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.455787  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:58.455793  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:58.455854  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:58.480448  293728 cri.go:89] found id: ""
	I1206 10:08:58.480491  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.480502  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:58.480512  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:58.480527  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:58.536659  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:58.536697  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:58.550566  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:58.550589  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:58.618059  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:58.608926    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.609448    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.611304    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.612003    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.613723    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:58.608926    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.609448    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.611304    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.612003    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.613723    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:58.618081  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:58.618093  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:58.643111  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:58.643142  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
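
Note the container-status probe is written to degrade gracefully: it resolves crictl if present and falls back to docker otherwise. The same pattern spelled out as a one-liner (command -v is the POSIX spelling of the backticked `which` used in the log):

    # prefer crictl, fall back to docker ps
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a
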
	I1206 10:09:01.172811  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:01.189894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:01.189970  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:01.216506  293728 cri.go:89] found id: ""
	I1206 10:09:01.216533  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.216542  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:01.216549  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:01.216610  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:01.248643  293728 cri.go:89] found id: ""
	I1206 10:09:01.248667  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.248675  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:01.248681  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:01.248754  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:01.282778  293728 cri.go:89] found id: ""
	I1206 10:09:01.282799  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.282808  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:01.282814  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:01.282874  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:01.317892  293728 cri.go:89] found id: ""
	I1206 10:09:01.317914  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.317923  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:01.317929  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:01.317996  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:01.344569  293728 cri.go:89] found id: ""
	I1206 10:09:01.344596  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.344606  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:01.344612  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:01.344675  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:01.374785  293728 cri.go:89] found id: ""
	I1206 10:09:01.374812  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.374822  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:01.374829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:01.374913  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:01.399962  293728 cri.go:89] found id: ""
	I1206 10:09:01.399986  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.399995  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:01.400001  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:01.400120  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:01.426824  293728 cri.go:89] found id: ""
	I1206 10:09:01.426850  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.426859  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:01.426877  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:01.426904  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:01.484968  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:01.485001  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:01.506470  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:01.506550  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:01.586157  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:01.577286    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.578043    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.579813    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.580524    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.582153    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
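The repeated "connection refused" on localhost:8443 means nothing is listening on the apiserver port yet, which is consistent with the empty crictl probes above. Two quick checks from a shell inside the node (a sketch, assuming access via `minikube ssh`; ss and pgrep are the stock util-linux/procps tools):

    # Is anything bound to the apiserver port, and is the process even running?
    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
    sudo pgrep -af kube-apiserver || echo "no kube-apiserver process"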
	I1206 10:09:01.586226  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:01.586241  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:01.616859  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:01.617050  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
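Each gathering round above pulls the same five log sources. They can be fetched manually with the identical commands minikube runs over SSH (commands copied from the log; PROFILE is a placeholder):

    minikube -p PROFILE ssh -- 'sudo journalctl -u kubelet -n 400'
    minikube -p PROFILE ssh -- 'sudo journalctl -u containerd -n 400'
    minikube -p PROFILE ssh -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'
    minikube -p PROFILE ssh -- 'sudo crictl ps -a'
    minikube -p PROFILE ssh -- 'sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig'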
	I1206 10:09:04.147855  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:04.161529  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:04.161601  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:04.185793  293728 cri.go:89] found id: ""
	I1206 10:09:04.185817  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.185826  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:04.185832  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:04.185893  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:04.213785  293728 cri.go:89] found id: ""
	I1206 10:09:04.213809  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.213818  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:04.213824  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:04.213886  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:04.245746  293728 cri.go:89] found id: ""
	I1206 10:09:04.245769  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.245778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:04.245784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:04.245844  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:04.276836  293728 cri.go:89] found id: ""
	I1206 10:09:04.276864  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.276873  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:04.276879  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:04.276949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:04.307027  293728 cri.go:89] found id: ""
	I1206 10:09:04.307054  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.307089  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:04.307096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:04.307171  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:04.332480  293728 cri.go:89] found id: ""
	I1206 10:09:04.332503  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.332511  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:04.332518  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:04.332580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:04.359083  293728 cri.go:89] found id: ""
	I1206 10:09:04.359105  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.359113  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:04.359119  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:04.359178  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:04.384459  293728 cri.go:89] found id: ""
	I1206 10:09:04.384527  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.384560  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:04.384576  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:04.384589  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:04.398476  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:04.398508  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:04.464529  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:04.455141    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.455968    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.457782    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.458361    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.459895    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:04.464551  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:04.464564  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:04.493800  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:04.493842  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:04.533422  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:04.533455  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
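Between gathering rounds minikube polls for the apiserver with the pgrep call that opens each cycle, at roughly three-second intervals (compare the timestamps 10:09:01, 10:09:04, 10:09:07). The wait amounts to something like this (a sketch of the observable behavior, not minikube's actual Go code; PROFILE is a placeholder):

    # Poll until a kube-apiserver process for this cluster shows up:
    until minikube -p PROFILE ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"; do
      sleep 3
    done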
	I1206 10:09:07.095340  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:07.106226  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:07.106321  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:07.133785  293728 cri.go:89] found id: ""
	I1206 10:09:07.133849  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.133886  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:07.133907  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:07.133972  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:07.169905  293728 cri.go:89] found id: ""
	I1206 10:09:07.169932  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.169957  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:07.169964  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:07.170039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:07.198212  293728 cri.go:89] found id: ""
	I1206 10:09:07.198285  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.198309  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:07.198329  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:07.198499  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:07.236730  293728 cri.go:89] found id: ""
	I1206 10:09:07.236809  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.236842  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:07.236862  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:07.236969  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:07.264908  293728 cri.go:89] found id: ""
	I1206 10:09:07.264984  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.265015  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:07.265037  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:07.265147  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:07.293030  293728 cri.go:89] found id: ""
	I1206 10:09:07.293102  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.293125  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:07.293146  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:07.293253  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:07.320479  293728 cri.go:89] found id: ""
	I1206 10:09:07.320542  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.320572  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:07.320600  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:07.320712  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:07.346369  293728 cri.go:89] found id: ""
	I1206 10:09:07.346431  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.346461  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:07.346486  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:07.346524  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:07.375165  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:07.375244  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:07.433189  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:07.433225  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:07.447472  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:07.447500  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:07.536150  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:07.524233    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.525315    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527184    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527855    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.532128    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:07.536173  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:07.536186  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:10.062333  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:10.073694  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:10.073767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:10.101307  293728 cri.go:89] found id: ""
	I1206 10:09:10.101330  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.101339  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:10.101346  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:10.101413  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:10.128394  293728 cri.go:89] found id: ""
	I1206 10:09:10.128420  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.128428  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:10.128436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:10.128497  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:10.154510  293728 cri.go:89] found id: ""
	I1206 10:09:10.154536  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.154545  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:10.154552  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:10.154611  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:10.179782  293728 cri.go:89] found id: ""
	I1206 10:09:10.179808  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.179816  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:10.179822  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:10.179888  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:10.210072  293728 cri.go:89] found id: ""
	I1206 10:09:10.210142  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.210171  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:10.210201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:10.210315  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:10.245657  293728 cri.go:89] found id: ""
	I1206 10:09:10.245676  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.245684  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:10.245691  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:10.245748  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:10.282232  293728 cri.go:89] found id: ""
	I1206 10:09:10.282305  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.282345  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:10.282365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:10.282454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:10.313160  293728 cri.go:89] found id: ""
	I1206 10:09:10.313225  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.313239  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:10.313249  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:10.313261  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:10.373196  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:10.373230  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:10.386792  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:10.386819  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:10.450525  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:10.442280    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.442968    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.444545    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.445040    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.446664    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:10.450547  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:10.450560  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:10.476832  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:10.476869  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
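The "container status" command repeated above is itself a small fallback chain: it resolves the crictl path with which (falling back to the bare name crictl if which finds nothing), and if that whole invocation fails it tries docker instead. Written out:

    # Prefer crictl (resolved via which), fall back to docker ps if crictl
    # is missing or errors out:
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a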
	I1206 10:09:13.012652  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:13.023659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:13.023732  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:13.047365  293728 cri.go:89] found id: ""
	I1206 10:09:13.047458  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.047473  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:13.047480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:13.047541  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:13.072937  293728 cri.go:89] found id: ""
	I1206 10:09:13.072961  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.072970  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:13.072987  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:13.073048  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:13.097439  293728 cri.go:89] found id: ""
	I1206 10:09:13.097515  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.097531  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:13.097539  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:13.097600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:13.123273  293728 cri.go:89] found id: ""
	I1206 10:09:13.123307  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.123316  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:13.123323  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:13.123426  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:13.149441  293728 cri.go:89] found id: ""
	I1206 10:09:13.149518  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.149534  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:13.149542  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:13.149608  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:13.174275  293728 cri.go:89] found id: ""
	I1206 10:09:13.174298  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.174306  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:13.174313  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:13.174379  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:13.203852  293728 cri.go:89] found id: ""
	I1206 10:09:13.203926  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.203942  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:13.203951  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:13.204013  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:13.237842  293728 cri.go:89] found id: ""
	I1206 10:09:13.237866  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.237875  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:13.237884  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:13.237899  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:13.305042  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:13.305078  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:13.319151  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:13.319178  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:13.383092  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:13.374391    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.375129    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.376927    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.377619    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.379235    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:13.383112  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:13.383123  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:13.409266  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:13.409295  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:15.937340  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:15.948165  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:15.948296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:15.973427  293728 cri.go:89] found id: ""
	I1206 10:09:15.973452  293728 logs.go:282] 0 containers: []
	W1206 10:09:15.973461  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:15.973467  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:15.973529  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:16.006761  293728 cri.go:89] found id: ""
	I1206 10:09:16.006806  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.006816  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:16.006824  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:16.006907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:16.034447  293728 cri.go:89] found id: ""
	I1206 10:09:16.034483  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.034492  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:16.034499  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:16.034572  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:16.060884  293728 cri.go:89] found id: ""
	I1206 10:09:16.060955  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.060972  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:16.060979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:16.061039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:16.090437  293728 cri.go:89] found id: ""
	I1206 10:09:16.090461  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.090470  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:16.090476  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:16.090548  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:16.118175  293728 cri.go:89] found id: ""
	I1206 10:09:16.118201  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.118209  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:16.118222  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:16.118342  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:16.144978  293728 cri.go:89] found id: ""
	I1206 10:09:16.145005  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.145015  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:16.145021  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:16.145083  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:16.169350  293728 cri.go:89] found id: ""
	I1206 10:09:16.169378  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.169392  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:16.169401  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:16.169412  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:16.228680  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:16.228755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:16.243103  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:16.243179  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:16.316618  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:16.307974    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.308682    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310238    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310832    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.312513    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:16.316645  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:16.316658  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:16.342620  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:16.342651  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:18.872579  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:18.883111  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:18.883184  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:18.909365  293728 cri.go:89] found id: ""
	I1206 10:09:18.909393  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.909402  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:18.909410  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:18.909480  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:18.933714  293728 cri.go:89] found id: ""
	I1206 10:09:18.933737  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.933746  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:18.933752  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:18.933811  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:18.963141  293728 cri.go:89] found id: ""
	I1206 10:09:18.963206  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.963228  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:18.963245  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:18.963333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:18.988486  293728 cri.go:89] found id: ""
	I1206 10:09:18.988511  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.988519  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:18.988526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:18.988604  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:19.020422  293728 cri.go:89] found id: ""
	I1206 10:09:19.020448  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.020456  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:19.020463  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:19.020543  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:19.045103  293728 cri.go:89] found id: ""
	I1206 10:09:19.045164  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.045179  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:19.045186  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:19.045245  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:19.069289  293728 cri.go:89] found id: ""
	I1206 10:09:19.069322  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.069331  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:19.069337  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:19.069403  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:19.094504  293728 cri.go:89] found id: ""
	I1206 10:09:19.094539  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.094547  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:19.094557  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:19.094569  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:19.108440  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:19.108469  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:19.175508  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:19.166822    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.167472    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169260    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169788    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.171507    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:19.175529  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:19.175542  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:19.201390  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:19.201424  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:19.243342  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:19.243364  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:21.808230  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:21.818876  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:21.818955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:21.848634  293728 cri.go:89] found id: ""
	I1206 10:09:21.848655  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.848663  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:21.848669  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:21.848728  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:21.872798  293728 cri.go:89] found id: ""
	I1206 10:09:21.872861  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.872875  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:21.872882  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:21.872938  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:21.900148  293728 cri.go:89] found id: ""
	I1206 10:09:21.900174  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.900183  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:21.900190  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:21.900250  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:21.924786  293728 cri.go:89] found id: ""
	I1206 10:09:21.924813  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.924822  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:21.924829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:21.924915  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:21.954178  293728 cri.go:89] found id: ""
	I1206 10:09:21.954212  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.954221  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:21.954227  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:21.954296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:21.979818  293728 cri.go:89] found id: ""
	I1206 10:09:21.979842  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.979850  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:21.979857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:21.979916  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:22.008409  293728 cri.go:89] found id: ""
	I1206 10:09:22.008435  293728 logs.go:282] 0 containers: []
	W1206 10:09:22.008445  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:22.008452  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:22.008527  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:22.035338  293728 cri.go:89] found id: ""
	I1206 10:09:22.035363  293728 logs.go:282] 0 containers: []
	W1206 10:09:22.035396  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:22.035407  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:22.035418  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:22.091435  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:22.091472  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:22.105532  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:22.105567  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:22.171773  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:22.163104    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.163868    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.165557    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.166181    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.167828    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:22.171793  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:22.171806  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:22.197667  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:22.197706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
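Every kubectl failure above dials https://localhost:8443 because that is the server endpoint recorded in the kubeconfig minikube passes along. The endpoint can be confirmed directly (a sketch; the server: field is standard kubeconfig YAML, and the file path is copied from the log):

    minikube -p PROFILE ssh -- 'sudo grep -n "server:" /var/lib/minikube/kubeconfig'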
	I1206 10:09:24.735529  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:24.748375  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:24.748558  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:24.788906  293728 cri.go:89] found id: ""
	I1206 10:09:24.788978  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.789002  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:24.789024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:24.789113  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:24.818364  293728 cri.go:89] found id: ""
	I1206 10:09:24.818431  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.818453  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:24.818472  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:24.818564  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:24.845760  293728 cri.go:89] found id: ""
	I1206 10:09:24.845802  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.845811  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:24.845817  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:24.845889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:24.872973  293728 cri.go:89] found id: ""
	I1206 10:09:24.872997  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.873006  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:24.873012  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:24.873076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:24.902758  293728 cri.go:89] found id: ""
	I1206 10:09:24.902791  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.902801  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:24.902809  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:24.902885  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:24.929539  293728 cri.go:89] found id: ""
	I1206 10:09:24.929565  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.929575  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:24.929582  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:24.929665  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:24.955731  293728 cri.go:89] found id: ""
	I1206 10:09:24.955806  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.955822  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:24.955829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:24.955891  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:24.980673  293728 cri.go:89] found id: ""
	I1206 10:09:24.980704  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.980713  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:24.980722  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:24.980734  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:25.017868  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:25.017899  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:25.077472  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:25.077510  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:25.093107  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:25.093139  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:25.164597  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:25.155645    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.156390    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158149    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158952    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.160572    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:25.155645    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.156390    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158149    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158952    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.160572    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:25.164635  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:25.164649  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
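
Each cycle above repeats roughly every three seconds: minikube first looks for a live apiserver process with pgrep, then asks the CRI for containers matching each control-plane component by name, and every query comes back empty. The same two checks can be run by hand inside the node (both commands are taken verbatim from the log):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver

An empty result from both, matching the `found id: ""` lines, means containerd has no kube-apiserver container in any state, created or exited, rather than a running but unhealthy one.
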
	I1206 10:09:27.694118  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:27.704932  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:27.705013  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:27.734684  293728 cri.go:89] found id: ""
	I1206 10:09:27.734762  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.734784  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:27.734802  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:27.734892  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:27.771355  293728 cri.go:89] found id: ""
	I1206 10:09:27.771442  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.771466  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:27.771485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:27.771568  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:27.800742  293728 cri.go:89] found id: ""
	I1206 10:09:27.800818  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.800836  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:27.800844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:27.800907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:27.827029  293728 cri.go:89] found id: ""
	I1206 10:09:27.827058  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.827068  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:27.827075  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:27.827136  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:27.853299  293728 cri.go:89] found id: ""
	I1206 10:09:27.853323  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.853332  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:27.853339  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:27.853431  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:27.878371  293728 cri.go:89] found id: ""
	I1206 10:09:27.878394  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.878402  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:27.878415  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:27.878525  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:27.903247  293728 cri.go:89] found id: ""
	I1206 10:09:27.903269  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.903277  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:27.903283  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:27.903405  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:27.927665  293728 cri.go:89] found id: ""
	I1206 10:09:27.927687  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.927695  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:27.927703  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:27.927714  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:27.993787  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:27.984910    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.985739    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.987460    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.988125    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.989907    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:27.984910    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.985739    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.987460    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.988125    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.989907    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:27.993808  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:27.993820  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:28.021097  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:28.021132  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:28.050410  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:28.050438  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:28.108602  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:28.108636  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:30.622836  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:30.633282  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:30.633354  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:30.657827  293728 cri.go:89] found id: ""
	I1206 10:09:30.657850  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.657859  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:30.657865  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:30.657929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:30.685495  293728 cri.go:89] found id: ""
	I1206 10:09:30.685525  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.685534  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:30.685541  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:30.685611  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:30.710543  293728 cri.go:89] found id: ""
	I1206 10:09:30.710576  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.710585  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:30.710591  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:30.710661  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:30.738572  293728 cri.go:89] found id: ""
	I1206 10:09:30.738667  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.738690  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:30.738710  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:30.738815  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:30.782603  293728 cri.go:89] found id: ""
	I1206 10:09:30.782684  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.782706  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:30.782725  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:30.782829  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:30.810264  293728 cri.go:89] found id: ""
	I1206 10:09:30.810342  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.810364  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:30.810387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:30.810479  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:30.835864  293728 cri.go:89] found id: ""
	I1206 10:09:30.835944  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.835960  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:30.835968  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:30.836050  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:30.860832  293728 cri.go:89] found id: ""
	I1206 10:09:30.860858  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.860867  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:30.860876  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:30.860887  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:30.917397  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:30.917433  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:30.931490  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:30.931572  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:31.004606  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:30.993339    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.994064    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.995768    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.996292    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.997903    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:30.993339    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.994064    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.995768    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.996292    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.997903    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:31.004692  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:31.004725  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:31.033130  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:31.033168  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:33.563282  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:33.574558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:33.574631  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:33.600752  293728 cri.go:89] found id: ""
	I1206 10:09:33.600784  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.600797  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:33.600804  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:33.600876  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:33.626879  293728 cri.go:89] found id: ""
	I1206 10:09:33.626909  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.626919  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:33.626925  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:33.626987  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:33.652921  293728 cri.go:89] found id: ""
	I1206 10:09:33.652945  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.652954  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:33.652960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:33.653025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:33.678584  293728 cri.go:89] found id: ""
	I1206 10:09:33.678619  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.678627  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:33.678634  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:33.678704  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:33.706401  293728 cri.go:89] found id: ""
	I1206 10:09:33.706424  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.706433  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:33.706439  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:33.706514  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:33.754300  293728 cri.go:89] found id: ""
	I1206 10:09:33.754326  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.754334  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:33.754341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:33.754410  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:33.782351  293728 cri.go:89] found id: ""
	I1206 10:09:33.782388  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.782397  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:33.782410  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:33.782479  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:33.809362  293728 cri.go:89] found id: ""
	I1206 10:09:33.809399  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.809407  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:33.809417  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:33.809428  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:33.845485  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:33.845510  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:33.902066  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:33.902106  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:33.915843  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:33.915871  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:33.983566  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:33.974999    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.975872    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977595    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977932    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.979555    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:33.974999    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.975872    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977595    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977932    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.979555    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:33.983589  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:33.983610  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
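
With no component containers to read, the fallback diagnostics reduce to four sources: the kubelet and containerd journals, kernel warnings from dmesg, and the CRI (or docker) container table. The equivalent collection outside the test harness, using the same one-liners the log shows, run inside the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a

On a kubeadm-based node like this one, the kubelet journal is usually the decisive source, since it records why the static pod manifests under /etc/kubernetes/manifests never produced running containers.
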
	I1206 10:09:36.512857  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:36.524687  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:36.524752  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:36.559537  293728 cri.go:89] found id: ""
	I1206 10:09:36.559559  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.559568  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:36.559574  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:36.559641  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:36.584964  293728 cri.go:89] found id: ""
	I1206 10:09:36.585033  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.585049  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:36.585056  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:36.585124  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:36.610724  293728 cri.go:89] found id: ""
	I1206 10:09:36.610750  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.610759  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:36.610765  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:36.610824  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:36.641090  293728 cri.go:89] found id: ""
	I1206 10:09:36.641158  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.641185  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:36.641198  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:36.641287  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:36.665900  293728 cri.go:89] found id: ""
	I1206 10:09:36.665926  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.665935  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:36.665941  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:36.666004  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:36.693620  293728 cri.go:89] found id: ""
	I1206 10:09:36.693650  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.693659  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:36.693666  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:36.693731  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:36.734543  293728 cri.go:89] found id: ""
	I1206 10:09:36.734621  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.734646  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:36.734665  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:36.734757  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:36.776081  293728 cri.go:89] found id: ""
	I1206 10:09:36.776146  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.776168  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:36.776188  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:36.776226  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:36.792679  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:36.792711  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:36.861792  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:36.852688    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.853295    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855348    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855858    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.857501    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:36.852688    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.853295    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855348    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855858    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.857501    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:36.861815  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:36.861828  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:36.887686  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:36.887722  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:36.915203  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:36.915229  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:39.473166  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:39.484986  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:39.485070  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:39.519022  293728 cri.go:89] found id: ""
	I1206 10:09:39.519084  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.519097  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:39.519105  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:39.519183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:39.550949  293728 cri.go:89] found id: ""
	I1206 10:09:39.550987  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.551002  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:39.551009  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:39.551083  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:39.576090  293728 cri.go:89] found id: ""
	I1206 10:09:39.576120  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.576129  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:39.576136  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:39.576199  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:39.602338  293728 cri.go:89] found id: ""
	I1206 10:09:39.602364  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.602374  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:39.602386  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:39.602447  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:39.627803  293728 cri.go:89] found id: ""
	I1206 10:09:39.627841  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.627850  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:39.627857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:39.627929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:39.653348  293728 cri.go:89] found id: ""
	I1206 10:09:39.653376  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.653385  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:39.653392  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:39.653454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:39.679324  293728 cri.go:89] found id: ""
	I1206 10:09:39.679418  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.679434  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:39.679442  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:39.679515  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:39.704684  293728 cri.go:89] found id: ""
	I1206 10:09:39.704708  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.704717  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:39.704726  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:39.704738  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:39.764873  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:39.764905  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:39.779533  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:39.779558  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:39.852807  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:39.844502    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.845176    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.846778    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.847166    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.848722    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:39.844502    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.845176    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.846778    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.847166    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.848722    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:39.852829  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:39.852842  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:39.879753  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:39.879787  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
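
The pattern then repeats unchanged through 10:09:48: poll for the apiserver, find nothing, gather the same logs, and try again on a short interval, presumably until minikube's wait times out and the failure surfaces in the test. A minimal stand-in for watching the same condition by hand (a sketch, not minikube's implementation):

    # poll for up to ~5 minutes for an apiserver process to appear
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done
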
	I1206 10:09:42.409609  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:42.421328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:42.421397  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:42.447308  293728 cri.go:89] found id: ""
	I1206 10:09:42.447333  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.447342  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:42.447349  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:42.447440  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:42.481946  293728 cri.go:89] found id: ""
	I1206 10:09:42.481977  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.481985  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:42.481992  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:42.482055  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:42.514307  293728 cri.go:89] found id: ""
	I1206 10:09:42.514378  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.514401  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:42.514420  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:42.514512  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:42.546780  293728 cri.go:89] found id: ""
	I1206 10:09:42.546806  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.546815  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:42.546822  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:42.546891  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:42.573407  293728 cri.go:89] found id: ""
	I1206 10:09:42.573430  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.573439  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:42.573445  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:42.573501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:42.599133  293728 cri.go:89] found id: ""
	I1206 10:09:42.599156  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.599164  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:42.599171  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:42.599233  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:42.625000  293728 cri.go:89] found id: ""
	I1206 10:09:42.625028  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.625037  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:42.625043  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:42.625107  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:42.654408  293728 cri.go:89] found id: ""
	I1206 10:09:42.654436  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.654446  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:42.654455  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:42.654467  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:42.711699  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:42.711733  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:42.727806  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:42.727881  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:42.811421  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:42.801418    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.803078    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.804330    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.805386    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.807056    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:42.801418    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.803078    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.804330    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.805386    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.807056    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:42.811446  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:42.811460  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:42.838410  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:42.838445  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:45.369084  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:45.380279  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:45.380388  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:45.405587  293728 cri.go:89] found id: ""
	I1206 10:09:45.405612  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.405621  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:45.405628  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:45.405688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:45.433060  293728 cri.go:89] found id: ""
	I1206 10:09:45.433088  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.433097  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:45.433103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:45.433164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:45.460740  293728 cri.go:89] found id: ""
	I1206 10:09:45.460763  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.460772  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:45.460778  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:45.460837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:45.497706  293728 cri.go:89] found id: ""
	I1206 10:09:45.497771  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.497793  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:45.497813  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:45.497904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:45.534656  293728 cri.go:89] found id: ""
	I1206 10:09:45.534681  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.534690  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:45.534696  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:45.534770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:45.564269  293728 cri.go:89] found id: ""
	I1206 10:09:45.564350  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.564372  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:45.564387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:45.564474  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:45.588438  293728 cri.go:89] found id: ""
	I1206 10:09:45.588517  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.588539  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:45.588558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:45.588651  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:45.613920  293728 cri.go:89] found id: ""
	I1206 10:09:45.613951  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.613960  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:45.613970  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:45.613980  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:45.641788  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:45.641863  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:45.699089  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:45.699123  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:45.712662  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:45.712734  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:45.793739  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:45.785473    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.786020    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.787671    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.788175    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.789766    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:45.785473    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.786020    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.787671    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.788175    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.789766    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:45.793759  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:45.793773  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
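The cycle above repeats one probe per control-plane component: minikube shells into the node and runs `sudo crictl ps -a --quiet --name=<component>`, and an empty result is what gets logged as "0 containers". A minimal, self-contained Go sketch of that probe (not minikube's actual cri.go; it assumes only a host with crictl installed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mirrors the probe in the log: `sudo crictl ps -a --quiet
// --name=<name>` prints matching container IDs one per line, so an empty
// output means "0 containers".
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, name := range components {
		ids, err := listCRIContainers(name)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}

On the node in this run, every probe returns an empty ID list, which is why each component is reported as not found.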
	I1206 10:09:48.320858  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:48.331937  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:48.332070  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:48.356716  293728 cri.go:89] found id: ""
	I1206 10:09:48.356784  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.356798  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:48.356806  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:48.356866  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:48.382138  293728 cri.go:89] found id: ""
	I1206 10:09:48.382172  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.382181  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:48.382188  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:48.382258  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:48.408214  293728 cri.go:89] found id: ""
	I1206 10:09:48.408238  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.408247  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:48.408253  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:48.408313  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:48.433328  293728 cri.go:89] found id: ""
	I1206 10:09:48.433351  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.433360  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:48.433366  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:48.433428  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:48.460263  293728 cri.go:89] found id: ""
	I1206 10:09:48.460284  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.460292  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:48.460298  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:48.460355  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:48.488344  293728 cri.go:89] found id: ""
	I1206 10:09:48.488373  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.488381  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:48.488388  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:48.488452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:48.521629  293728 cri.go:89] found id: ""
	I1206 10:09:48.521658  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.521666  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:48.521673  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:48.521759  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:48.549255  293728 cri.go:89] found id: ""
	I1206 10:09:48.549321  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.549344  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:48.549365  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:48.549392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:48.609413  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:48.609450  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:48.623661  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:48.623688  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:48.693637  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:48.684667    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.685431    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687132    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687585    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.689240    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:48.684667    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.685431    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687132    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687585    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.689240    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:48.693661  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:48.693674  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:48.719587  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:48.719660  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:51.258260  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:51.268785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:51.268856  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:51.295768  293728 cri.go:89] found id: ""
	I1206 10:09:51.295793  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.295801  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:51.295808  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:51.295879  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:51.321853  293728 cri.go:89] found id: ""
	I1206 10:09:51.321886  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.321894  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:51.321900  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:51.321968  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:51.347472  293728 cri.go:89] found id: ""
	I1206 10:09:51.347494  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.347502  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:51.347517  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:51.347575  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:51.371656  293728 cri.go:89] found id: ""
	I1206 10:09:51.371683  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.371692  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:51.371698  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:51.371758  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:51.397262  293728 cri.go:89] found id: ""
	I1206 10:09:51.397289  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.397298  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:51.397305  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:51.397409  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:51.423015  293728 cri.go:89] found id: ""
	I1206 10:09:51.423045  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.423061  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:51.423076  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:51.423149  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:51.454355  293728 cri.go:89] found id: ""
	I1206 10:09:51.454381  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.454390  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:51.454396  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:51.454463  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:51.486768  293728 cri.go:89] found id: ""
	I1206 10:09:51.486808  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.486823  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:51.486832  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:51.486843  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:51.554153  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:51.554192  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:51.568560  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:51.568590  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:51.634642  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:51.626552    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.627100    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.628640    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.629103    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.630610    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:51.626552    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.627100    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.628640    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.629103    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.630610    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:51.634664  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:51.634678  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:51.660429  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:51.660463  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
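Every "describe nodes" attempt above dies the same way: kubectl cannot reach the apiserver, so each request fails at the TCP layer with "connect: connection refused". The failure is reproducible with a plain dial against the same address; a minimal sketch, assuming only that nothing is listening on localhost:8443:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the apiserver address kubectl uses in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver container running, this prints the same
		// "connect: connection refused" seen in every stderr block.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

Because the error is a refused TCP connection rather than a timeout or TLS failure, the port is simply closed: consistent with the crictl probes finding no kube-apiserver container at all.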
	I1206 10:09:54.188738  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:54.201905  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:54.201981  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:54.227986  293728 cri.go:89] found id: ""
	I1206 10:09:54.228012  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.228021  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:54.228028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:54.228113  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:54.254201  293728 cri.go:89] found id: ""
	I1206 10:09:54.254235  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.254245  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:54.254283  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:54.254395  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:54.278782  293728 cri.go:89] found id: ""
	I1206 10:09:54.278820  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.278830  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:54.278852  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:54.278935  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:54.303206  293728 cri.go:89] found id: ""
	I1206 10:09:54.303240  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.303249  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:54.303256  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:54.303323  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:54.328700  293728 cri.go:89] found id: ""
	I1206 10:09:54.328726  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.328735  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:54.328741  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:54.328818  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:54.352531  293728 cri.go:89] found id: ""
	I1206 10:09:54.352613  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.352638  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:54.352656  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:54.352746  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:54.381751  293728 cri.go:89] found id: ""
	I1206 10:09:54.381785  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.381795  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:54.381802  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:54.381873  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:54.410917  293728 cri.go:89] found id: ""
	I1206 10:09:54.410993  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.411015  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:54.411037  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:54.411076  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:54.440257  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:54.440285  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:54.500235  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:54.500278  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:54.515938  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:54.515966  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:54.588801  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:54.579599    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.580550    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582125    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582602    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.584281    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:54.579599    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.580550    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582125    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582602    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.584281    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:54.588823  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:54.588836  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:57.116312  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:57.127033  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:57.127111  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:57.152251  293728 cri.go:89] found id: ""
	I1206 10:09:57.152273  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.152282  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:57.152288  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:57.152346  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:57.176684  293728 cri.go:89] found id: ""
	I1206 10:09:57.176758  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.176773  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:57.176781  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:57.176840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:57.202374  293728 cri.go:89] found id: ""
	I1206 10:09:57.202436  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.202470  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:57.202494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:57.202580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:57.227547  293728 cri.go:89] found id: ""
	I1206 10:09:57.227573  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.227582  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:57.227589  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:57.227650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:57.253673  293728 cri.go:89] found id: ""
	I1206 10:09:57.253705  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.253714  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:57.253721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:57.253789  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:57.278618  293728 cri.go:89] found id: ""
	I1206 10:09:57.278644  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.278654  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:57.278660  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:57.278722  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:57.304336  293728 cri.go:89] found id: ""
	I1206 10:09:57.304384  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.304397  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:57.304423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:57.304508  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:57.334469  293728 cri.go:89] found id: ""
	I1206 10:09:57.334492  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.334500  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:57.334508  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:57.334520  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:57.348891  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:57.348922  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:57.415906  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:57.407558    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.408081    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.409719    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.410287    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.411964    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:09:57.407558    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.408081    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.409719    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.410287    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.411964    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:09:57.415927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:57.415939  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:57.441880  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:57.441918  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:57.475269  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:57.475297  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
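Between cycles, minikube waits roughly three seconds and re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` to see whether an apiserver process has appeared. A hedged sketch of that poll-until-deadline pattern (the helper name and the two-minute budget are hypothetical, not minikube's actual values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning is a hypothetical stand-in for the check in the log:
// pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // hypothetical budget
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s gap between cycles above
	}
	fmt.Println("timed out waiting for kube-apiserver")
}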
	I1206 10:10:00.036981  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:00.091003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:00.091183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:00.199598  293728 cri.go:89] found id: ""
	I1206 10:10:00.199642  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.199652  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:00.199660  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:00.199761  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:00.291513  293728 cri.go:89] found id: ""
	I1206 10:10:00.291550  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.291562  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:00.291569  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:00.291653  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:00.363428  293728 cri.go:89] found id: ""
	I1206 10:10:00.363514  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.363541  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:00.363559  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:00.363706  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:00.471969  293728 cri.go:89] found id: ""
	I1206 10:10:00.471994  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.472004  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:00.472013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:00.472080  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:00.548937  293728 cri.go:89] found id: ""
	I1206 10:10:00.548960  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.548969  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:00.548976  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:00.549039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:00.612750  293728 cri.go:89] found id: ""
	I1206 10:10:00.612774  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.612783  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:00.612790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:00.612857  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:00.648024  293728 cri.go:89] found id: ""
	I1206 10:10:00.648051  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.648061  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:00.648068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:00.648145  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:00.678506  293728 cri.go:89] found id: ""
	I1206 10:10:00.678587  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.678615  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:00.678636  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:00.678671  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:00.755139  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:00.755237  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:00.771588  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:00.771629  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:00.849622  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:00.840203    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.840934    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.842739    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.843443    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.845027    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:00.840203    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.840934    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.842739    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.843443    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.845027    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:00.849656  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:00.849669  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:00.876546  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:00.876583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:03.409148  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:03.420472  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:03.420547  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:03.449464  293728 cri.go:89] found id: ""
	I1206 10:10:03.449487  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.449496  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:03.449521  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:03.449598  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:03.482241  293728 cri.go:89] found id: ""
	I1206 10:10:03.482267  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.482276  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:03.482286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:03.482349  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:03.512048  293728 cri.go:89] found id: ""
	I1206 10:10:03.512075  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.512084  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:03.512090  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:03.512153  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:03.544039  293728 cri.go:89] found id: ""
	I1206 10:10:03.544064  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.544073  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:03.544080  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:03.544159  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:03.568866  293728 cri.go:89] found id: ""
	I1206 10:10:03.568942  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.568966  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:03.568978  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:03.569071  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:03.595896  293728 cri.go:89] found id: ""
	I1206 10:10:03.595930  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.595940  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:03.595946  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:03.596020  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:03.620834  293728 cri.go:89] found id: ""
	I1206 10:10:03.620863  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.620871  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:03.620878  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:03.620950  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:03.644327  293728 cri.go:89] found id: ""
	I1206 10:10:03.644359  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.644368  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:03.644377  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:03.644392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:03.707856  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:03.699517    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.700161    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.701732    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.702251    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.703903    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:03.699517    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.700161    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.701732    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.702251    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.703903    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:03.707879  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:03.707891  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:03.735529  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:03.735562  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:03.767489  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:03.767516  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:03.831889  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:03.831926  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
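Each cycle also fans out the same four log-gathering commands (kubelet, containerd, dmesg, container status), each wrapped in `/bin/bash -c` exactly as ssh_runner runs them over SSH. A minimal local sketch; the command strings are copied verbatim from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// Command strings copied from the log; each runs under `/bin/bash -c`
// the same way ssh_runner.go invokes them on the remote node.
var gatherers = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"containerd", "sudo journalctl -u containerd -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, g := range gatherers {
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", g.name, err)
			continue
		}
		fmt.Printf("gathered %d bytes for %s\n", len(out), g.name)
	}
}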
	I1206 10:10:06.346582  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:06.357845  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:06.357929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:06.387151  293728 cri.go:89] found id: ""
	I1206 10:10:06.387176  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.387185  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:06.387192  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:06.387256  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:06.413165  293728 cri.go:89] found id: ""
	I1206 10:10:06.413194  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.413203  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:06.413210  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:06.413271  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:06.437677  293728 cri.go:89] found id: ""
	I1206 10:10:06.437701  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.437710  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:06.437716  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:06.437772  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:06.463040  293728 cri.go:89] found id: ""
	I1206 10:10:06.463070  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.463080  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:06.463087  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:06.463150  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:06.494675  293728 cri.go:89] found id: ""
	I1206 10:10:06.494751  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.494774  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:06.494794  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:06.494889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:06.526246  293728 cri.go:89] found id: ""
	I1206 10:10:06.526316  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.526337  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:06.526357  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:06.526440  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:06.559804  293728 cri.go:89] found id: ""
	I1206 10:10:06.559829  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.559839  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:06.559845  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:06.559907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:06.589855  293728 cri.go:89] found id: ""
	I1206 10:10:06.589930  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.589964  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:06.590003  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:06.590032  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:06.616596  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:06.616632  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:06.646994  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:06.647021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:06.702957  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:06.702993  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:06.716751  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:06.716778  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:06.798071  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:06.789752    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.790292    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.791805    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.792344    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.793982    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:06.789752    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.790292    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.791805    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.792344    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.793982    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:09.298347  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:09.308960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:09.309035  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:09.333650  293728 cri.go:89] found id: ""
	I1206 10:10:09.333675  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.333683  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:09.333690  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:09.333767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:09.357861  293728 cri.go:89] found id: ""
	I1206 10:10:09.357885  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.357894  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:09.357900  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:09.358010  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:09.382744  293728 cri.go:89] found id: ""
	I1206 10:10:09.382770  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.382779  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:09.382785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:09.382878  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:09.413180  293728 cri.go:89] found id: ""
	I1206 10:10:09.413259  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.413282  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:09.413295  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:09.413376  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:09.438201  293728 cri.go:89] found id: ""
	I1206 10:10:09.438227  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.438235  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:09.438242  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:09.438300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:09.462981  293728 cri.go:89] found id: ""
	I1206 10:10:09.463058  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.463084  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:09.463103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:09.463199  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:09.489818  293728 cri.go:89] found id: ""
	I1206 10:10:09.489840  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.489849  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:09.489855  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:09.489914  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:09.517662  293728 cri.go:89] found id: ""
	I1206 10:10:09.517689  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.517698  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
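Each cycle above probes the node for every control-plane component in turn: minikube shells in and runs crictl against the containerd runc root for the k8s.io namespace, and an empty ID list (found id: "") means the container was never created, not merely stopped. A minimal sketch of that probe loop follows; the loop structure is assumed for illustration, not minikube's actual cri.go, though the command matches the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Same command the log shows minikube running over SSH.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %q: %v\n", name, err)
    			continue
    		}
    		if strings.TrimSpace(string(out)) == "" {
    			fmt.Printf("No container was found matching %q\n", name)
    		}
    	}
    }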
	I1206 10:10:09.517707  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:09.517719  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:09.576466  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:09.576502  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:09.590374  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:09.590401  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:09.655862  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:09.646406    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.646998    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.648878    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.649656    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.651513    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:09.646406    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.646998    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.648878    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.649656    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.651513    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:09.655883  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:09.655895  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:09.681441  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:09.681477  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
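The "container status" step uses a shell fallback: the backticks substitute the full path of crictl when it is installed (or the bare word crictl when it is not), and if that listing fails entirely the command falls back to docker ps -a. A sketch of running the one-liner the way ssh_runner does, via bash -c, with the command string copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Prefer crictl (resolved via command substitution); fall back to
    	// docker when the crictl listing fails.
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
    	if err != nil {
    		fmt.Println("both crictl and docker listings failed:", err)
    	}
    	fmt.Print(string(out))
    }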
	I1206 10:10:12.211127  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:12.222215  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:12.222285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:12.247472  293728 cri.go:89] found id: ""
	I1206 10:10:12.247547  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.247562  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:12.247573  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:12.247633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:12.272505  293728 cri.go:89] found id: ""
	I1206 10:10:12.272533  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.272543  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:12.272550  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:12.272638  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:12.297673  293728 cri.go:89] found id: ""
	I1206 10:10:12.297698  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.297707  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:12.297715  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:12.297830  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:12.322568  293728 cri.go:89] found id: ""
	I1206 10:10:12.322609  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.322618  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:12.322625  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:12.322701  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:12.349304  293728 cri.go:89] found id: ""
	I1206 10:10:12.349331  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.349341  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:12.349347  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:12.349443  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:12.375736  293728 cri.go:89] found id: ""
	I1206 10:10:12.375762  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.375771  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:12.375778  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:12.375840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:12.400942  293728 cri.go:89] found id: ""
	I1206 10:10:12.400966  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.400974  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:12.400981  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:12.401040  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:12.426874  293728 cri.go:89] found id: ""
	I1206 10:10:12.426916  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.426926  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:12.426936  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:12.426948  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:12.484510  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:12.484587  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
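Log gathering is likewise plain remote shell: the kubelet and containerd units come from journalctl with a 400-line cap, and dmesg is filtered to warning level and above. A compact sketch of that gather step (the helper shape is assumed, not minikube's logs.go; the command strings are taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one of the shell commands seen in the log and returns its output.
    func gather(cmd string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	steps := map[string]string{
    		"kubelet":    "sudo journalctl -u kubelet -n 400",
    		"containerd": "sudo journalctl -u containerd -n 400",
    		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	}
    	for name, cmd := range steps {
    		fmt.Println("Gathering logs for", name, "...")
    		if out, err := gather(cmd); err != nil {
    			fmt.Printf("%s: %v\n%s", name, err, out)
    		}
    	}
    }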
	I1206 10:10:12.499107  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:12.499186  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:12.572427  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:12.563920    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.564850    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566425    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566780    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.568265    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:12.563920    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.564850    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566425    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566780    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.568265    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:12.572450  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:12.572466  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:12.598814  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:12.598849  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:15.128638  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:15.139805  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:15.139876  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:15.165109  293728 cri.go:89] found id: ""
	I1206 10:10:15.165133  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.165149  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:15.165156  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:15.165219  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:15.196948  293728 cri.go:89] found id: ""
	I1206 10:10:15.196974  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.196982  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:15.196989  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:15.197059  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:15.222058  293728 cri.go:89] found id: ""
	I1206 10:10:15.222082  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.222090  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:15.222096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:15.222155  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:15.248215  293728 cri.go:89] found id: ""
	I1206 10:10:15.248238  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.248247  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:15.248254  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:15.248312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:15.273082  293728 cri.go:89] found id: ""
	I1206 10:10:15.273104  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.273113  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:15.273120  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:15.273179  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:15.298006  293728 cri.go:89] found id: ""
	I1206 10:10:15.298029  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.298037  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:15.298043  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:15.298101  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:15.322519  293728 cri.go:89] found id: ""
	I1206 10:10:15.322542  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.322550  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:15.322557  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:15.322615  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:15.347746  293728 cri.go:89] found id: ""
	I1206 10:10:15.347770  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.347778  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:15.347786  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:15.347797  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:15.361534  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:15.361561  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:15.427348  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:15.418245    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.419137    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421066    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421690    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.423366    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:15.418245    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.419137    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421066    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421690    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.423366    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
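The "describe nodes" step does not rely on a host kubectl: it runs the version-matched binary minikube unpacked under /var/lib/minikube/binaries/v1.35.0-beta.0/, pointed at the kubeconfig on the node itself. When that exits non-zero, the warning appears to print the captured stderr twice, once under "stderr:" and once between the "** stderr **" markers, which is why every failure block above repeats its five connection-refused lines. A sketch of the invocation, with paths copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same invocation the log shows, using the node-local kubeconfig
    	// rather than the host's ~/.kube/config.
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    	if err != nil {
    		// With the apiserver down this exits 1 and stderr carries the
    		// "connection refused" discovery errors seen above.
    		fmt.Printf("failed describe nodes: %v\n%s", err, out)
    		return
    	}
    	fmt.Print(string(out))
    }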
	I1206 10:10:15.427370  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:15.427404  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:15.453826  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:15.453864  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:15.487015  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:15.487049  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:18.053317  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:18.064493  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:18.064566  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:18.089748  293728 cri.go:89] found id: ""
	I1206 10:10:18.089773  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.089782  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:18.089789  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:18.089850  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:18.116011  293728 cri.go:89] found id: ""
	I1206 10:10:18.116039  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.116048  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:18.116055  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:18.116116  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:18.146676  293728 cri.go:89] found id: ""
	I1206 10:10:18.146701  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.146710  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:18.146716  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:18.146783  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:18.172596  293728 cri.go:89] found id: ""
	I1206 10:10:18.172621  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.172631  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:18.172643  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:18.172703  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:18.198506  293728 cri.go:89] found id: ""
	I1206 10:10:18.198584  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.198608  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:18.198630  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:18.198747  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:18.230708  293728 cri.go:89] found id: ""
	I1206 10:10:18.230786  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.230812  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:18.230830  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:18.230955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:18.257169  293728 cri.go:89] found id: ""
	I1206 10:10:18.257235  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.257250  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:18.257257  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:18.257317  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:18.285950  293728 cri.go:89] found id: ""
	I1206 10:10:18.285976  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.285985  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:18.285994  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:18.286006  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:18.318446  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:18.318471  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:18.379191  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:18.379227  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:18.393268  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:18.393295  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:18.458997  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:18.449882    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.450796    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452543    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452857    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.454349    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:18.449882    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.450796    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452543    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452857    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.454349    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:18.459023  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:18.459035  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
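Stepping back from any single cycle: the timestamps show the whole sequence repeating on a roughly three-second cadence (10:10:09, :12, :15, :18, ...). Each iteration starts with pgrep -xnf kube-apiserver.*minikube.* (exact full-command-line match, newest process) and, finding nothing, falls through to the container probes and log gathering again. A minimal sketch of that outer wait loop, with the deadline assumed for illustration rather than taken from minikube:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Same probe the log shows; pgrep exits non-zero when no
    		// matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }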
	I1206 10:10:20.987221  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:20.999561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:20.999633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:21.037750  293728 cri.go:89] found id: ""
	I1206 10:10:21.037771  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.037780  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:21.037786  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:21.037846  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:21.063327  293728 cri.go:89] found id: ""
	I1206 10:10:21.063350  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.063358  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:21.063364  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:21.063448  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:21.088200  293728 cri.go:89] found id: ""
	I1206 10:10:21.088223  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.088231  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:21.088237  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:21.088298  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:21.118025  293728 cri.go:89] found id: ""
	I1206 10:10:21.118051  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.118061  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:21.118068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:21.118126  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:21.143740  293728 cri.go:89] found id: ""
	I1206 10:10:21.143770  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.143779  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:21.143785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:21.143848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:21.169323  293728 cri.go:89] found id: ""
	I1206 10:10:21.169401  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.169417  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:21.169424  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:21.169501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:21.194291  293728 cri.go:89] found id: ""
	I1206 10:10:21.194356  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.194380  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:21.194398  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:21.194490  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:21.219471  293728 cri.go:89] found id: ""
	I1206 10:10:21.219599  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.219653  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:21.219679  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:21.219706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:21.277216  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:21.277252  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:21.291736  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:21.291766  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:21.366215  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:21.357353    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.358173    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.359989    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.360738    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.362264    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:21.357353    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.358173    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.359989    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.360738    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.362264    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:21.366236  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:21.366250  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:21.392405  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:21.392437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:23.923653  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:23.934595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:23.934670  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:23.961107  293728 cri.go:89] found id: ""
	I1206 10:10:23.961130  293728 logs.go:282] 0 containers: []
	W1206 10:10:23.961138  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:23.961145  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:23.961209  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:23.994692  293728 cri.go:89] found id: ""
	I1206 10:10:23.994729  293728 logs.go:282] 0 containers: []
	W1206 10:10:23.994739  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:23.994745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:23.994817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:24.028605  293728 cri.go:89] found id: ""
	I1206 10:10:24.028689  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.028715  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:24.028735  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:24.028848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:24.057290  293728 cri.go:89] found id: ""
	I1206 10:10:24.057317  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.057326  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:24.057333  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:24.057400  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:24.085994  293728 cri.go:89] found id: ""
	I1206 10:10:24.086029  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.086039  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:24.086045  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:24.086128  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:24.112798  293728 cri.go:89] found id: ""
	I1206 10:10:24.112826  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.112835  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:24.112841  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:24.112930  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:24.139149  293728 cri.go:89] found id: ""
	I1206 10:10:24.139175  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.139184  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:24.139190  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:24.139300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:24.165213  293728 cri.go:89] found id: ""
	I1206 10:10:24.165239  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.165248  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:24.165257  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:24.165268  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:24.223441  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:24.223477  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:24.237256  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:24.237282  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:24.303131  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:24.295355    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.295806    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297324    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297646    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.299178    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:24.295355    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.295806    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297324    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297646    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.299178    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:24.303154  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:24.303170  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:24.329120  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:24.329160  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:26.857977  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:26.868844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:26.868920  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:26.893530  293728 cri.go:89] found id: ""
	I1206 10:10:26.893555  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.893563  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:26.893569  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:26.893628  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:26.922692  293728 cri.go:89] found id: ""
	I1206 10:10:26.922718  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.922727  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:26.922733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:26.922794  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:26.948535  293728 cri.go:89] found id: ""
	I1206 10:10:26.948560  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.948569  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:26.948575  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:26.948640  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:26.976097  293728 cri.go:89] found id: ""
	I1206 10:10:26.976167  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.976193  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:26.976212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:26.976300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:27.010083  293728 cri.go:89] found id: ""
	I1206 10:10:27.010161  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.010184  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:27.010229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:27.010333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:27.038839  293728 cri.go:89] found id: ""
	I1206 10:10:27.038913  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.038934  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:27.038954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:27.039084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:27.066982  293728 cri.go:89] found id: ""
	I1206 10:10:27.067063  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.067086  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:27.067105  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:27.067216  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:27.092863  293728 cri.go:89] found id: ""
	I1206 10:10:27.092891  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.092899  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:27.092909  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:27.092950  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:27.120341  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:27.120375  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:27.177452  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:27.177489  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:27.191505  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:27.191533  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:27.260108  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:27.251592    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.252285    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.253999    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.254325    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.255968    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:27.251592    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.252285    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.253999    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.254325    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.255968    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:27.260129  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:27.260141  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:29.785293  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:29.795873  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:29.795947  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:29.826896  293728 cri.go:89] found id: ""
	I1206 10:10:29.826934  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.826944  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:29.826950  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:29.827093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:29.857768  293728 cri.go:89] found id: ""
	I1206 10:10:29.857793  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.857803  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:29.857809  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:29.857881  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:29.885651  293728 cri.go:89] found id: ""
	I1206 10:10:29.885686  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.885696  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:29.885721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:29.885805  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:29.910764  293728 cri.go:89] found id: ""
	I1206 10:10:29.910892  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.910916  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:29.910928  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:29.911014  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:29.937166  293728 cri.go:89] found id: ""
	I1206 10:10:29.937191  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.937201  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:29.937208  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:29.937270  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:29.962684  293728 cri.go:89] found id: ""
	I1206 10:10:29.962717  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.962726  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:29.962733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:29.962799  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:29.993702  293728 cri.go:89] found id: ""
	I1206 10:10:29.993776  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.993799  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:29.993818  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:29.993904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:30.061338  293728 cri.go:89] found id: ""
	I1206 10:10:30.061423  293728 logs.go:282] 0 containers: []
	W1206 10:10:30.061447  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:30.061482  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:30.061514  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:30.110307  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:30.110344  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:30.178825  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:30.178864  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:30.194614  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:30.194641  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:30.269484  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:30.258437    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.259022    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.261951    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263145    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263843    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:30.258437    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.259022    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.261951    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263145    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263843    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:30.269507  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:30.269521  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
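
Each probe cycle above is identical: minikube pgreps for a running kube-apiserver, then asks crictl for any container, running or exited, whose name matches each control-plane component; an empty result becomes the paired 'found id: ""' / "0 containers" lines in the log. The same checks can be reproduced by hand against the node. The crictl and journalctl invocations below are the exact ones logged above, wrapped in "minikube ssh"; <profile> is a placeholder for this run's profile name:

    # List kube-apiserver containers in any state via the CRI, as logs.go does;
    # no output corresponds to the 'found id: ""' lines above:
    minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
    # Pull the same kubelet tail that the "Gathering logs for kubelet" step collects:
    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
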
	I1206 10:10:32.796483  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:32.807219  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:32.807347  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:32.832338  293728 cri.go:89] found id: ""
	I1206 10:10:32.832365  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.832374  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:32.832381  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:32.832443  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:32.857737  293728 cri.go:89] found id: ""
	I1206 10:10:32.857763  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.857771  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:32.857780  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:32.857840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:32.886514  293728 cri.go:89] found id: ""
	I1206 10:10:32.886537  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.886546  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:32.886553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:32.886622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:32.916133  293728 cri.go:89] found id: ""
	I1206 10:10:32.916157  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.916166  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:32.916172  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:32.916278  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:32.940460  293728 cri.go:89] found id: ""
	I1206 10:10:32.940485  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.940493  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:32.940500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:32.940580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:32.967101  293728 cri.go:89] found id: ""
	I1206 10:10:32.967129  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.967139  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:32.967146  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:32.967255  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:33.003657  293728 cri.go:89] found id: ""
	I1206 10:10:33.003687  293728 logs.go:282] 0 containers: []
	W1206 10:10:33.003696  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:33.003703  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:33.003817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:33.034541  293728 cri.go:89] found id: ""
	I1206 10:10:33.034570  293728 logs.go:282] 0 containers: []
	W1206 10:10:33.034579  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:33.034587  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:33.034599  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:33.103182  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:33.094513    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.095149    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.096956    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.097426    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.099078    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:33.094513    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.095149    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.096956    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.097426    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.099078    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:33.103205  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:33.103219  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:33.129473  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:33.129508  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:33.158555  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:33.158583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:33.216375  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:33.216409  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:35.730137  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:35.743050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:35.743211  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:35.782795  293728 cri.go:89] found id: ""
	I1206 10:10:35.782873  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.782897  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:35.782917  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:35.783049  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:35.810026  293728 cri.go:89] found id: ""
	I1206 10:10:35.810102  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.810126  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:35.810144  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:35.810234  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:35.835162  293728 cri.go:89] found id: ""
	I1206 10:10:35.835240  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.835265  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:35.835286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:35.835412  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:35.860195  293728 cri.go:89] found id: ""
	I1206 10:10:35.860227  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.860236  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:35.860247  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:35.860386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:35.886939  293728 cri.go:89] found id: ""
	I1206 10:10:35.886977  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.886995  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:35.887003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:35.887093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:35.917822  293728 cri.go:89] found id: ""
	I1206 10:10:35.917848  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.917858  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:35.917864  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:35.917944  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:35.945452  293728 cri.go:89] found id: ""
	I1206 10:10:35.945478  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.945488  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:35.945494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:35.945556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:35.986146  293728 cri.go:89] found id: ""
	I1206 10:10:35.986174  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.986183  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:35.986193  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:35.986204  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:36.053722  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:36.053759  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:36.068786  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:36.068815  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:36.132981  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:36.124259    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.124911    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.126650    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.127348    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.128990    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:36.124259    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.124911    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.126650    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.127348    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.128990    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:36.133005  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:36.133018  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:36.158971  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:36.159009  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
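
Every "describe nodes" attempt fails the same way: kubectl on the node cannot reach an apiserver on localhost:8443 (connection refused), i.e. no control-plane container ever came up. Two follow-up checks, as a sketch only: <profile> is again a placeholder, "ss" is assumed to be available in the node image, and /etc/kubernetes/manifests is the standard kubeadm static-pod directory:

    # Is anything listening on the apiserver port at all?
    minikube -p <profile> ssh -- sudo ss -ltn 'sport = :8443'
    # Static-pod manifests the kubelet should be launching:
    minikube -p <profile> ssh -- sudo ls -l /etc/kubernetes/manifests
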
	I1206 10:10:38.688989  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:38.699954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:38.700025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:38.732646  293728 cri.go:89] found id: ""
	I1206 10:10:38.732680  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.732689  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:38.732696  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:38.732757  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:38.760849  293728 cri.go:89] found id: ""
	I1206 10:10:38.760878  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.760888  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:38.760894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:38.760952  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:38.793233  293728 cri.go:89] found id: ""
	I1206 10:10:38.793258  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.793267  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:38.793274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:38.793355  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:38.818786  293728 cri.go:89] found id: ""
	I1206 10:10:38.818814  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.818823  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:38.818831  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:38.818925  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:38.845346  293728 cri.go:89] found id: ""
	I1206 10:10:38.845373  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.845382  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:38.845388  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:38.845449  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:38.876064  293728 cri.go:89] found id: ""
	I1206 10:10:38.876088  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.876097  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:38.876103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:38.876193  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:38.901010  293728 cri.go:89] found id: ""
	I1206 10:10:38.901037  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.901046  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:38.901053  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:38.901121  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:38.931159  293728 cri.go:89] found id: ""
	I1206 10:10:38.931185  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.931194  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:38.931203  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:38.931214  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:38.945219  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:38.945247  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:39.040279  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:39.031608    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.032449    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034282    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034607    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.036094    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:39.031608    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.032449    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034282    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034607    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.036094    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:39.040303  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:39.040315  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:39.069669  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:39.069709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:39.102102  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:39.102133  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:41.662114  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:41.674379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:41.674461  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:41.700812  293728 cri.go:89] found id: ""
	I1206 10:10:41.700836  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.700846  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:41.700852  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:41.700945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:41.732717  293728 cri.go:89] found id: ""
	I1206 10:10:41.732744  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.732753  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:41.732759  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:41.732818  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:41.765582  293728 cri.go:89] found id: ""
	I1206 10:10:41.765609  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.765618  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:41.765624  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:41.765684  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:41.795133  293728 cri.go:89] found id: ""
	I1206 10:10:41.795160  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.795169  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:41.795178  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:41.795240  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:41.824848  293728 cri.go:89] found id: ""
	I1206 10:10:41.824876  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.824885  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:41.824894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:41.825002  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:41.850710  293728 cri.go:89] found id: ""
	I1206 10:10:41.850738  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.850748  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:41.850754  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:41.850817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:41.876689  293728 cri.go:89] found id: ""
	I1206 10:10:41.876714  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.876723  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:41.876730  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:41.876837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:41.910933  293728 cri.go:89] found id: ""
	I1206 10:10:41.910958  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.910967  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:41.910977  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:41.910988  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:41.940383  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:41.940411  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:42.002369  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:42.002465  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:42.036193  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:42.036220  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:42.116431  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:42.104500    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.106090    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.107160    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.108051    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.110987    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:42.104500    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.106090    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.107160    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.108051    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.110987    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:42.116466  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:42.116485  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:44.645750  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:44.657010  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:44.657087  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:44.681487  293728 cri.go:89] found id: ""
	I1206 10:10:44.681511  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.681520  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:44.681526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:44.681632  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:44.707007  293728 cri.go:89] found id: ""
	I1206 10:10:44.707032  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.707059  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:44.707065  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:44.707124  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:44.740358  293728 cri.go:89] found id: ""
	I1206 10:10:44.740384  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.740394  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:44.740400  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:44.740462  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:44.774979  293728 cri.go:89] found id: ""
	I1206 10:10:44.775005  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.775013  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:44.775020  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:44.775099  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:44.802733  293728 cri.go:89] found id: ""
	I1206 10:10:44.802759  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.802768  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:44.802774  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:44.802836  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:44.830059  293728 cri.go:89] found id: ""
	I1206 10:10:44.830082  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.830091  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:44.830104  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:44.830164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:44.857962  293728 cri.go:89] found id: ""
	I1206 10:10:44.857988  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.857997  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:44.858003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:44.858062  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:44.882971  293728 cri.go:89] found id: ""
	I1206 10:10:44.882993  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.883002  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:44.883011  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:44.883021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:44.939214  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:44.939249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:44.953046  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:44.953074  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:45.078537  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:45.068034    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069216    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069914    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.072098    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.073533    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:45.068034    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069216    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069914    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.072098    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.073533    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:45.078570  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:45.078586  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:45.108352  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:45.108392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:47.660188  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:47.670914  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:47.670992  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:47.695337  293728 cri.go:89] found id: ""
	I1206 10:10:47.695363  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.695417  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:47.695425  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:47.695496  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:47.728763  293728 cri.go:89] found id: ""
	I1206 10:10:47.728834  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.728855  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:47.728877  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:47.728982  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:47.755564  293728 cri.go:89] found id: ""
	I1206 10:10:47.755640  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.755663  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:47.755683  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:47.755794  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:47.786763  293728 cri.go:89] found id: ""
	I1206 10:10:47.786838  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.786869  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:47.786892  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:47.786999  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:47.813109  293728 cri.go:89] found id: ""
	I1206 10:10:47.813187  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.813209  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:47.813227  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:47.813312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:47.839872  293728 cri.go:89] found id: ""
	I1206 10:10:47.839947  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.839963  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:47.839971  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:47.840029  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:47.864803  293728 cri.go:89] found id: ""
	I1206 10:10:47.864827  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.864835  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:47.864842  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:47.864908  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:47.893715  293728 cri.go:89] found id: ""
	I1206 10:10:47.893740  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.893749  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:47.893759  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:47.893770  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:47.962240  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:47.954010    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.954579    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956159    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956626    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.958129    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:47.954010    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.954579    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956159    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956626    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.958129    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:47.962263  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:47.962275  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:47.988774  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:47.988808  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:48.022271  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:48.022301  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:48.088564  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:48.088601  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
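
The probe timestamps (10:10:29, 10:10:32, 10:10:35, ...) show the loop re-checking roughly every three seconds until the start timeout expires. To pull that cadence out of a saved copy of this log, a one-liner such as the following works; <logfile> is a placeholder for wherever the log text was saved:

    # Print the timestamp of each apiserver probe; consecutive lines differ by ~3 s:
    grep 'pgrep -xnf kube-apiserver' <logfile> | awk '{print $2}'
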
	I1206 10:10:50.605005  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:50.615765  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:50.615847  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:50.641365  293728 cri.go:89] found id: ""
	I1206 10:10:50.641389  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.641397  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:50.641404  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:50.641468  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:50.665749  293728 cri.go:89] found id: ""
	I1206 10:10:50.665775  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.665784  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:50.665790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:50.665848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:50.693092  293728 cri.go:89] found id: ""
	I1206 10:10:50.693117  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.693133  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:50.693139  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:50.693198  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:50.721292  293728 cri.go:89] found id: ""
	I1206 10:10:50.721319  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.721328  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:50.721335  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:50.721394  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:50.757580  293728 cri.go:89] found id: ""
	I1206 10:10:50.757608  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.757617  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:50.757623  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:50.757681  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:50.795246  293728 cri.go:89] found id: ""
	I1206 10:10:50.795275  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.795284  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:50.795290  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:50.795352  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:50.831466  293728 cri.go:89] found id: ""
	I1206 10:10:50.831489  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.831497  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:50.831503  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:50.831563  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:50.856692  293728 cri.go:89] found id: ""
	I1206 10:10:50.856719  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.856728  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:50.856737  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:50.856748  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:50.914369  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:50.914404  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:50.928218  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:50.928249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:51.001552  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:50.990416    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.991460    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.992543    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.993284    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.996113    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:50.990416    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.991460    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.992543    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.993284    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.996113    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:51.001649  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:51.001679  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:51.035670  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:51.035706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:53.568268  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:53.579523  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:53.579600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:53.605604  293728 cri.go:89] found id: ""
	I1206 10:10:53.605626  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.605636  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:53.605642  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:53.605704  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:53.632535  293728 cri.go:89] found id: ""
	I1206 10:10:53.632558  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.632566  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:53.632573  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:53.632633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:53.664459  293728 cri.go:89] found id: ""
	I1206 10:10:53.664485  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.664494  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:53.664500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:53.664561  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:53.689200  293728 cri.go:89] found id: ""
	I1206 10:10:53.689227  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.689235  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:53.689242  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:53.689303  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:53.724364  293728 cri.go:89] found id: ""
	I1206 10:10:53.724391  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.724401  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:53.724408  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:53.724489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:53.760957  293728 cri.go:89] found id: ""
	I1206 10:10:53.760985  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.760995  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:53.761002  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:53.761065  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:53.795256  293728 cri.go:89] found id: ""
	I1206 10:10:53.795417  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.795469  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:53.795490  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:53.795618  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:53.820946  293728 cri.go:89] found id: ""
	I1206 10:10:53.821014  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.821028  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:53.821038  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:53.821049  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:53.850603  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:53.850632  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:53.910568  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:53.910606  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:53.924408  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:53.924435  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:53.993865  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:53.984800    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.985669    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987623    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987938    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.989469    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:53.984800    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.985669    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987623    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987938    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.989469    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:53.993926  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:53.993964  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
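
The lines above are one full iteration of the probe minikube runs while waiting for the control plane: a pgrep for a kube-apiserver process, then a crictl query per expected component, each returning no IDs. A minimal bash sketch of the same per-component check, assuming crictl is installed and pointed at the containerd CRI socket (both true inside the minikube node):

    # Query each expected control-plane container; an empty result prints
    # NONE, matching the 'found id: ""' / '0 containers' lines in the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-NONE}"
    done
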
	I1206 10:10:56.525953  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:56.537170  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:56.537251  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:56.562800  293728 cri.go:89] found id: ""
	I1206 10:10:56.562825  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.562834  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:56.562841  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:56.562903  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:56.589000  293728 cri.go:89] found id: ""
	I1206 10:10:56.589032  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.589042  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:56.589048  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:56.589108  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:56.613252  293728 cri.go:89] found id: ""
	I1206 10:10:56.613276  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.613284  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:56.613291  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:56.613354  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:56.643136  293728 cri.go:89] found id: ""
	I1206 10:10:56.643176  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.643186  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:56.643193  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:56.643265  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:56.669515  293728 cri.go:89] found id: ""
	I1206 10:10:56.669539  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.669547  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:56.669554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:56.669613  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:56.694989  293728 cri.go:89] found id: ""
	I1206 10:10:56.695013  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.695022  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:56.695028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:56.695295  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:56.733872  293728 cri.go:89] found id: ""
	I1206 10:10:56.733898  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.733907  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:56.733914  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:56.733981  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:56.768700  293728 cri.go:89] found id: ""
	I1206 10:10:56.768725  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.768734  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:56.768745  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:56.768765  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:56.801786  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:56.801812  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:56.857425  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:56.857458  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:56.870898  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:56.870929  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:56.939737  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:56.930826    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.931761    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933321    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933912    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.935699    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:56.930826    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.931761    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933321    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933912    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.935699    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:56.939814  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:56.939833  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:59.467303  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:59.479788  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:59.479913  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:59.507178  293728 cri.go:89] found id: ""
	I1206 10:10:59.507214  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.507223  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:59.507229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:59.507307  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:59.532362  293728 cri.go:89] found id: ""
	I1206 10:10:59.532435  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.532460  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:59.532478  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:59.532565  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:59.561793  293728 cri.go:89] found id: ""
	I1206 10:10:59.561869  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.561893  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:59.561912  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:59.562006  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:59.587885  293728 cri.go:89] found id: ""
	I1206 10:10:59.587914  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.587933  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:59.587955  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:59.588043  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:59.616632  293728 cri.go:89] found id: ""
	I1206 10:10:59.616701  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.616723  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:59.616741  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:59.616828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:59.641907  293728 cri.go:89] found id: ""
	I1206 10:10:59.641942  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.641950  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:59.641957  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:59.642030  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:59.666146  293728 cri.go:89] found id: ""
	I1206 10:10:59.666181  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.666190  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:59.666197  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:59.666267  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:59.690454  293728 cri.go:89] found id: ""
	I1206 10:10:59.690525  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.690549  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:59.690571  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:59.690606  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:59.747565  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:59.747602  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:59.761979  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:59.762033  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:59.832718  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:59.824094    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825243    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825921    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827020    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827705    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:59.824094    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825243    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825921    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827020    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827705    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:59.832743  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:59.832755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:59.858330  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:59.858360  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
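
With every crictl query coming back empty, a reasonable next step (one the harness itself does not take) is to check containerd directly. A sketch assuming the default containerd socket and the k8s.io namespace that the cri.go lines above reference:

    # Is containerd up, and does its k8s.io namespace hold any containers?
    sudo systemctl status containerd --no-pager
    sudo journalctl -u containerd -n 50 --no-pager
    # ctr ships with containerd; empty output here would match the empty
    # crictl results in the log.
    sudo ctr -n k8s.io containers list
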
	I1206 10:11:02.390395  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:02.401485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:02.401558  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:02.427611  293728 cri.go:89] found id: ""
	I1206 10:11:02.427638  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.427647  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:02.427654  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:02.427729  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:02.454049  293728 cri.go:89] found id: ""
	I1206 10:11:02.454078  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.454087  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:02.454093  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:02.454154  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:02.480392  293728 cri.go:89] found id: ""
	I1206 10:11:02.480417  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.480425  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:02.480431  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:02.480489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:02.506546  293728 cri.go:89] found id: ""
	I1206 10:11:02.506572  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.506581  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:02.506587  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:02.506647  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:02.531917  293728 cri.go:89] found id: ""
	I1206 10:11:02.531954  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.531963  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:02.531979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:02.532097  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:02.559738  293728 cri.go:89] found id: ""
	I1206 10:11:02.559759  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.559768  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:02.559774  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:02.559834  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:02.584556  293728 cri.go:89] found id: ""
	I1206 10:11:02.584578  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.584587  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:02.584593  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:02.584652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:02.617108  293728 cri.go:89] found id: ""
	I1206 10:11:02.617164  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.617174  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:02.617183  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:02.617199  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:02.645764  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:02.645802  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:02.675285  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:02.675317  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:02.733222  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:02.733262  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:02.747026  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:02.747069  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:02.827017  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:02.817993    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.818819    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.820650    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.821248    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.822937    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:02.817993    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.818819    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.820650    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.821248    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.822937    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
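
Every "kubectl describe nodes" attempt above fails the same way: dial tcp [::1]:8443 is refused, which means nothing is listening on the apiserver port inside the node, not that the apiserver is rejecting the request. A quick way to confirm that from a shell on the node, assuming ss and curl are available there:

    # No listener on 8443 explains the "connection refused" stderr above.
    sudo ss -ltnp | grep ':8443' || echo "no listener on 8443"
    # If something were listening, the health endpoint would answer here.
    curl -k --max-time 5 https://localhost:8443/healthz || true
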
	I1206 10:11:05.327889  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:05.338718  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:05.338812  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:05.363857  293728 cri.go:89] found id: ""
	I1206 10:11:05.363882  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.363892  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:05.363899  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:05.363969  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:05.389419  293728 cri.go:89] found id: ""
	I1206 10:11:05.389444  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.389453  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:05.389462  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:05.389522  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:05.416875  293728 cri.go:89] found id: ""
	I1206 10:11:05.416937  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.416952  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:05.416960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:05.417018  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:05.445294  293728 cri.go:89] found id: ""
	I1206 10:11:05.445316  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.445325  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:05.445331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:05.445389  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:05.469930  293728 cri.go:89] found id: ""
	I1206 10:11:05.469952  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.469960  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:05.469966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:05.470023  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:05.494527  293728 cri.go:89] found id: ""
	I1206 10:11:05.494591  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.494623  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:05.494641  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:05.494712  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:05.519703  293728 cri.go:89] found id: ""
	I1206 10:11:05.519727  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.519736  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:05.519742  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:05.519802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:05.544697  293728 cri.go:89] found id: ""
	I1206 10:11:05.544721  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.544729  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:05.544738  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:05.544751  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:05.558261  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:05.558288  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:05.627696  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:05.618572   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.619577   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.621405   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.622011   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.623059   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:05.618572   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.619577   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.621405   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.622011   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.623059   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:05.627760  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:05.627781  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:05.653464  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:05.653499  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:05.684619  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:05.684647  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:08.247509  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:08.260609  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:08.260730  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:08.289483  293728 cri.go:89] found id: ""
	I1206 10:11:08.289551  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.289567  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:08.289580  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:08.289640  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:08.318013  293728 cri.go:89] found id: ""
	I1206 10:11:08.318037  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.318045  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:08.318051  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:08.318110  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:08.351762  293728 cri.go:89] found id: ""
	I1206 10:11:08.351785  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.351794  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:08.351800  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:08.351858  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:08.377083  293728 cri.go:89] found id: ""
	I1206 10:11:08.377159  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.377174  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:08.377181  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:08.377240  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:08.406041  293728 cri.go:89] found id: ""
	I1206 10:11:08.406063  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.406072  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:08.406077  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:08.406135  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:08.430970  293728 cri.go:89] found id: ""
	I1206 10:11:08.430996  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.431004  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:08.431011  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:08.431096  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:08.454833  293728 cri.go:89] found id: ""
	I1206 10:11:08.454857  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.454865  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:08.454872  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:08.454931  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:08.484046  293728 cri.go:89] found id: ""
	I1206 10:11:08.484113  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.484129  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:08.484139  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:08.484150  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:08.551224  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:08.542554   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.543265   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545049   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545727   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.547350   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:08.542554   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.543265   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545049   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545727   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.547350   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:08.551247  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:08.551259  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:08.577706  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:08.577740  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:08.605435  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:08.605462  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:08.665984  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:08.666020  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
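
The timestamps show the harness retrying this whole probe roughly every three seconds. An equivalent standalone wait loop with an explicit deadline, using the same pgrep pattern the log runs, could look like this (a sketch; the 120-second budget is illustrative, not minikube's actual timeout):

    # Poll for a running kube-apiserver, giving up after 120 seconds.
    deadline=$((SECONDS + 120))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
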
	I1206 10:11:11.180758  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:11.193428  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:11.193501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:11.230343  293728 cri.go:89] found id: ""
	I1206 10:11:11.230374  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.230383  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:11.230389  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:11.230452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:11.267153  293728 cri.go:89] found id: ""
	I1206 10:11:11.267177  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.267187  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:11.267193  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:11.267258  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:11.299679  293728 cri.go:89] found id: ""
	I1206 10:11:11.299708  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.299718  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:11.299724  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:11.299784  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:11.325476  293728 cri.go:89] found id: ""
	I1206 10:11:11.325503  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.325512  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:11.325518  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:11.325600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:11.351586  293728 cri.go:89] found id: ""
	I1206 10:11:11.351614  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.351624  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:11.351632  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:11.351700  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:11.377176  293728 cri.go:89] found id: ""
	I1206 10:11:11.377203  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.377212  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:11.377219  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:11.377308  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:11.402618  293728 cri.go:89] found id: ""
	I1206 10:11:11.402644  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.402652  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:11.402659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:11.402745  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:11.429503  293728 cri.go:89] found id: ""
	I1206 10:11:11.429529  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.429538  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:11.429547  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:11.429562  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:11.486599  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:11.486638  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:11.500957  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:11.500987  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:11.577987  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:11.568882   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.569760   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.571647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.572318   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.573801   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:11.568882   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.569760   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.571647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.572318   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.573801   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:11.578008  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:11.578021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:11.604993  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:11.605027  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:14.137875  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:14.148737  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:14.148811  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:14.173594  293728 cri.go:89] found id: ""
	I1206 10:11:14.173671  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.173695  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:14.173714  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:14.173809  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:14.200007  293728 cri.go:89] found id: ""
	I1206 10:11:14.200033  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.200043  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:14.200050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:14.200117  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:14.233924  293728 cri.go:89] found id: ""
	I1206 10:11:14.233951  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.233959  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:14.233966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:14.234030  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:14.264436  293728 cri.go:89] found id: ""
	I1206 10:11:14.264464  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.264474  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:14.264480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:14.264540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:14.292320  293728 cri.go:89] found id: ""
	I1206 10:11:14.292348  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.292359  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:14.292365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:14.292426  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:14.317612  293728 cri.go:89] found id: ""
	I1206 10:11:14.317640  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.317649  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:14.317656  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:14.317714  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:14.342496  293728 cri.go:89] found id: ""
	I1206 10:11:14.342521  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.342530  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:14.342536  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:14.342596  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:14.368247  293728 cri.go:89] found id: ""
	I1206 10:11:14.368273  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.368282  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:14.368292  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:14.368304  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:14.394942  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:14.394976  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:14.428315  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:14.428345  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:14.484824  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:14.484855  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:14.498675  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:14.498705  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:14.568051  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:14.559253   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.560001   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.561736   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.562345   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.564094   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
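All of these describe-nodes failures have the same root cause: nothing is listening on the apiserver port 8443 inside the node, so every kubectl attempt dies before it can even fetch the API group list. A minimal way to confirm the port state by hand (a sketch, not part of the test driver: the profile name is a placeholder, and `ss`/`curl` being present in the node image is an assumption):

    # Check whether anything listens on the apiserver port (run inside the node,
    # e.g. via `minikube ssh -p <profile>`; <profile> is hypothetical here).
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    # While the apiserver is down this fails exactly like kubectl above:
    curl -sk https://localhost:8443/healthz || echo "connection refused"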
	I1206 10:11:17.068293  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:17.078902  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:17.078976  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:17.103674  293728 cri.go:89] found id: ""
	I1206 10:11:17.103699  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.103708  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:17.103715  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:17.103777  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:17.139412  293728 cri.go:89] found id: ""
	I1206 10:11:17.139481  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.139503  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:17.139523  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:17.139610  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:17.168435  293728 cri.go:89] found id: ""
	I1206 10:11:17.168461  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.168470  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:17.168476  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:17.168568  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:17.198788  293728 cri.go:89] found id: ""
	I1206 10:11:17.198854  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.198879  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:17.198898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:17.198983  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:17.233132  293728 cri.go:89] found id: ""
	I1206 10:11:17.233218  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.233242  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:17.233262  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:17.233356  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:17.268547  293728 cri.go:89] found id: ""
	I1206 10:11:17.268613  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.268637  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:17.268655  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:17.268741  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:17.303935  293728 cri.go:89] found id: ""
	I1206 10:11:17.303957  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.303966  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:17.303972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:17.304032  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:17.328050  293728 cri.go:89] found id: ""
	I1206 10:11:17.328074  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.328084  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
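The eight probes above all come back empty: with the apiserver down, no control-plane or addon container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) was ever created. The same sweep can be reproduced inside the node with a loop over exactly the command the driver runs (component list and crictl invocation copied from the log; node shell access is assumed):

    # Probe each expected component the way the driver does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done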
	I1206 10:11:17.328092  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:17.328139  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:17.387715  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:17.387750  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:17.401545  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:17.401576  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:17.467905  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:17.459187   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.459639   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461308   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461736   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.463309   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:17.467927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:17.467939  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:17.493972  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:17.494007  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
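The timestamps show the driver repeating this whole probe-and-gather pass roughly every three seconds. The wait it implements is equivalent to polling for a running apiserver process until a deadline (cadence and pgrep pattern taken from the log; the 300-second deadline below is an assumption for illustration):

    # Poll for kube-apiserver the way the driver does, every ~3s until a deadline.
    deadline=$((SECONDS + 300))   # assumed timeout, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver" >&2; break; }
      sleep 3
    done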
	I1206 10:11:20.027522  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:20.040220  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:20.040323  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:20.068566  293728 cri.go:89] found id: ""
	I1206 10:11:20.068592  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.068602  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:20.068610  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:20.068691  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:20.096577  293728 cri.go:89] found id: ""
	I1206 10:11:20.096616  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.096626  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:20.096633  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:20.096791  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:20.125150  293728 cri.go:89] found id: ""
	I1206 10:11:20.125175  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.125185  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:20.125192  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:20.125253  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:20.151199  293728 cri.go:89] found id: ""
	I1206 10:11:20.151225  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.151234  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:20.151241  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:20.151303  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:20.177323  293728 cri.go:89] found id: ""
	I1206 10:11:20.177349  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.177359  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:20.177365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:20.177454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:20.207914  293728 cri.go:89] found id: ""
	I1206 10:11:20.207940  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.207950  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:20.207956  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:20.208015  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:20.250213  293728 cri.go:89] found id: ""
	I1206 10:11:20.250247  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.250256  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:20.250265  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:20.250336  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:20.284320  293728 cri.go:89] found id: ""
	I1206 10:11:20.284356  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.284365  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:20.284374  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:20.284384  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:20.317496  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:20.317524  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:20.373988  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:20.374021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:20.387702  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:20.387728  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:20.454347  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:20.446421   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.447014   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448572   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448979   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.450465   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:20.446421   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.447014   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448572   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448979   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.450465   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:20.454370  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:20.454383  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:22.980202  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:22.991835  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:22.991961  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:23.022299  293728 cri.go:89] found id: ""
	I1206 10:11:23.022379  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.022404  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:23.022423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:23.022532  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:23.055611  293728 cri.go:89] found id: ""
	I1206 10:11:23.055634  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.055643  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:23.055649  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:23.055708  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:23.080752  293728 cri.go:89] found id: ""
	I1206 10:11:23.080828  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.080850  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:23.080870  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:23.080965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:23.106107  293728 cri.go:89] found id: ""
	I1206 10:11:23.106134  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.106143  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:23.106150  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:23.106212  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:23.132303  293728 cri.go:89] found id: ""
	I1206 10:11:23.132327  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.132335  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:23.132342  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:23.132408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:23.156632  293728 cri.go:89] found id: ""
	I1206 10:11:23.156697  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.156712  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:23.156719  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:23.156775  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:23.180697  293728 cri.go:89] found id: ""
	I1206 10:11:23.180764  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.180777  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:23.180784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:23.180842  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:23.208267  293728 cri.go:89] found id: ""
	I1206 10:11:23.208341  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.208364  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:23.208387  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:23.208425  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:23.292598  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:23.283687   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.284573   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.286441   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.287115   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.288724   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:23.283687   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.284573   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.286441   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.287115   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.288724   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:23.292618  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:23.292631  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:23.318604  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:23.318641  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:23.352649  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:23.352676  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:23.411769  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:23.411803  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:25.925870  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:25.936619  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:25.936701  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:25.963699  293728 cri.go:89] found id: ""
	I1206 10:11:25.963722  293728 logs.go:282] 0 containers: []
	W1206 10:11:25.963731  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:25.963738  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:25.963802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:25.995991  293728 cri.go:89] found id: ""
	I1206 10:11:25.996066  293728 logs.go:282] 0 containers: []
	W1206 10:11:25.996088  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:25.996106  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:25.996196  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:26.030700  293728 cri.go:89] found id: ""
	I1206 10:11:26.030728  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.030738  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:26.030745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:26.030809  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:26.066012  293728 cri.go:89] found id: ""
	I1206 10:11:26.066044  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.066054  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:26.066060  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:26.066125  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:26.092723  293728 cri.go:89] found id: ""
	I1206 10:11:26.092753  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.092763  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:26.092769  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:26.092837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:26.120031  293728 cri.go:89] found id: ""
	I1206 10:11:26.120108  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.120125  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:26.120132  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:26.120198  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:26.147104  293728 cri.go:89] found id: ""
	I1206 10:11:26.147131  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.147152  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:26.147158  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:26.147257  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:26.173188  293728 cri.go:89] found id: ""
	I1206 10:11:26.173212  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.173221  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:26.173230  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:26.173273  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:26.259536  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:26.250765   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.251710   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253385   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253690   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.255208   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:26.250765   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.251710   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253385   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253690   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.255208   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:26.259581  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:26.259596  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:26.288770  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:26.288853  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:26.318991  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:26.319082  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:26.377710  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:26.377743  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:28.892920  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:28.903557  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:28.903622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:28.928667  293728 cri.go:89] found id: ""
	I1206 10:11:28.928691  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.928699  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:28.928707  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:28.928767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:28.953528  293728 cri.go:89] found id: ""
	I1206 10:11:28.953554  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.953562  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:28.953568  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:28.953626  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:28.981995  293728 cri.go:89] found id: ""
	I1206 10:11:28.982022  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.982031  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:28.982037  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:28.982101  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:29.021133  293728 cri.go:89] found id: ""
	I1206 10:11:29.021161  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.021170  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:29.021177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:29.021244  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:29.051961  293728 cri.go:89] found id: ""
	I1206 10:11:29.052044  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.052056  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:29.052063  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:29.052157  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:29.076239  293728 cri.go:89] found id: ""
	I1206 10:11:29.076260  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.076268  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:29.076274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:29.076331  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:29.100533  293728 cri.go:89] found id: ""
	I1206 10:11:29.100568  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.100577  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:29.100583  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:29.100642  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:29.125877  293728 cri.go:89] found id: ""
	I1206 10:11:29.125900  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.125909  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:29.125917  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:29.125929  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:29.184407  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:29.184441  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:29.198478  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:29.198553  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:29.291075  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:29.280844   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.281788   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285131   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285582   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.287240   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:29.280844   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.281788   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285131   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285582   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.287240   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:29.291096  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:29.291109  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:29.317026  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:29.317059  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:31.845985  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:31.857066  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:31.857145  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:31.882982  293728 cri.go:89] found id: ""
	I1206 10:11:31.883059  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.883081  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:31.883101  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:31.883187  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:31.908108  293728 cri.go:89] found id: ""
	I1206 10:11:31.908138  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.908148  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:31.908154  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:31.908244  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:31.933164  293728 cri.go:89] found id: ""
	I1206 10:11:31.933188  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.933197  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:31.933204  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:31.933261  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:31.961760  293728 cri.go:89] found id: ""
	I1206 10:11:31.961784  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.961792  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:31.961798  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:31.961864  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:31.993806  293728 cri.go:89] found id: ""
	I1206 10:11:31.993836  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.993845  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:31.993851  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:31.993915  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:32.025453  293728 cri.go:89] found id: ""
	I1206 10:11:32.025480  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.025489  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:32.025496  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:32.025556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:32.053138  293728 cri.go:89] found id: ""
	I1206 10:11:32.053160  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.053171  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:32.053177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:32.053236  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:32.084984  293728 cri.go:89] found id: ""
	I1206 10:11:32.085009  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.085018  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:32.085027  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:32.085058  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:32.113246  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:32.113276  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:32.170516  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:32.170553  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:32.184767  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:32.184797  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:32.266194  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:32.257320   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.258649   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.259490   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.260223   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.261917   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:32.257320   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.258649   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.259490   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.260223   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.261917   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:32.266261  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:32.266289  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:34.798474  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:34.809168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:34.809239  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:34.837292  293728 cri.go:89] found id: ""
	I1206 10:11:34.837314  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.837322  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:34.837329  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:34.837387  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:34.863331  293728 cri.go:89] found id: ""
	I1206 10:11:34.863353  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.863362  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:34.863369  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:34.863465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:34.893355  293728 cri.go:89] found id: ""
	I1206 10:11:34.893379  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.893388  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:34.893395  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:34.893452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:34.919127  293728 cri.go:89] found id: ""
	I1206 10:11:34.919153  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.919162  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:34.919169  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:34.919228  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:34.948423  293728 cri.go:89] found id: ""
	I1206 10:11:34.948448  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.948458  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:34.948467  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:34.948526  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:34.984476  293728 cri.go:89] found id: ""
	I1206 10:11:34.984503  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.984513  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:34.984520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:34.984579  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:35.017804  293728 cri.go:89] found id: ""
	I1206 10:11:35.017831  293728 logs.go:282] 0 containers: []
	W1206 10:11:35.017840  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:35.017847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:35.017955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:35.049243  293728 cri.go:89] found id: ""
	I1206 10:11:35.049270  293728 logs.go:282] 0 containers: []
	W1206 10:11:35.049279  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:35.049288  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:35.049300  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:35.109333  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:35.109371  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:35.123612  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:35.123643  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:35.191474  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:35.181616   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.182533   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184226   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184809   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.186401   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:35.181616   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.182533   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184226   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184809   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.186401   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:35.191495  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:35.191509  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:35.217926  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:35.218007  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:37.758372  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:37.769553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:37.769625  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:37.799573  293728 cri.go:89] found id: ""
	I1206 10:11:37.799606  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.799617  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:37.799626  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:37.799697  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:37.828542  293728 cri.go:89] found id: ""
	I1206 10:11:37.828580  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.828589  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:37.828595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:37.828670  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:37.854197  293728 cri.go:89] found id: ""
	I1206 10:11:37.854223  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.854233  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:37.854239  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:37.854299  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:37.879147  293728 cri.go:89] found id: ""
	I1206 10:11:37.879220  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.879243  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:37.879261  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:37.879346  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:37.905390  293728 cri.go:89] found id: ""
	I1206 10:11:37.905412  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.905421  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:37.905428  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:37.905533  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:37.933187  293728 cri.go:89] found id: ""
	I1206 10:11:37.933251  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.933266  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:37.933273  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:37.933333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:37.957719  293728 cri.go:89] found id: ""
	I1206 10:11:37.957743  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.957756  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:37.957763  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:37.957823  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:37.991726  293728 cri.go:89] found id: ""
	I1206 10:11:37.991755  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.991765  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:37.991775  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:37.991787  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:38.072266  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:38.063102   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.063715   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.065465   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.066011   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.067888   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:38.063102   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.063715   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.065465   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.066011   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.067888   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:38.072293  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:38.072308  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:38.100264  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:38.100302  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:38.128959  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:38.128989  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:38.186487  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:38.186517  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
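	(Annotation: the timestamps show the cycle above repeating roughly every three seconds — `sudo pgrep -xnf kube-apiserver.*minikube.*`, then the crictl listings, then a fresh round of kubelet/dmesg/describe-nodes/containerd log gathering. A sketch of such a wait loop follows; the one-minute deadline is an assumption for illustration only, not minikube's actual timeout:)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll for a running kube-apiserver process the same way the log
	// does, retrying every three seconds until an assumed deadline.
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no process matches, which Run()
		// surfaces as an error — the failing case seen throughout
		// this log.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```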
	I1206 10:11:40.700896  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:40.711768  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:40.711841  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:40.737641  293728 cri.go:89] found id: ""
	I1206 10:11:40.737664  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.737675  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:40.737681  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:40.737740  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:40.763410  293728 cri.go:89] found id: ""
	I1206 10:11:40.763437  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.763447  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:40.763453  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:40.763521  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:40.788254  293728 cri.go:89] found id: ""
	I1206 10:11:40.788277  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.788287  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:40.788293  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:40.788351  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:40.812429  293728 cri.go:89] found id: ""
	I1206 10:11:40.812454  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.812464  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:40.812470  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:40.812577  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:40.836598  293728 cri.go:89] found id: ""
	I1206 10:11:40.836623  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.836632  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:40.836639  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:40.836699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:40.865558  293728 cri.go:89] found id: ""
	I1206 10:11:40.865584  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.865593  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:40.865600  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:40.865658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:40.890394  293728 cri.go:89] found id: ""
	I1206 10:11:40.890419  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.890428  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:40.890434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:40.890494  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:40.919443  293728 cri.go:89] found id: ""
	I1206 10:11:40.919471  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.919480  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:40.919489  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:40.919501  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:40.932761  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:40.932788  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:41.018904  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:41.007696   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.008625   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.010702   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.011857   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.013002   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:41.007696   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.008625   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.010702   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.011857   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.013002   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
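	(Annotation: every `kubectl describe nodes` attempt above fails with `dial tcp [::1]:8443: connect: connection refused`, i.e. nothing is listening on the apiserver port inside the node, which is consistent with the empty kube-apiserver container listings. A minimal reachability probe, assuming the same localhost:8443 endpoint the kubectl errors refer to:)

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connect: connection refused" from this probe matches the log:
	// no process is bound to the apiserver port, so every kubectl
	// call against localhost:8443 fails immediately.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```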
	I1206 10:11:41.018927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:41.018942  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:41.049613  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:41.049648  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:41.077525  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:41.077552  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:43.637314  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:43.648009  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:43.648084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:43.673268  293728 cri.go:89] found id: ""
	I1206 10:11:43.673291  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.673299  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:43.673306  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:43.673363  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:43.698533  293728 cri.go:89] found id: ""
	I1206 10:11:43.698563  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.698573  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:43.698579  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:43.698666  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:43.726409  293728 cri.go:89] found id: ""
	I1206 10:11:43.726434  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.726443  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:43.726449  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:43.726524  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:43.753336  293728 cri.go:89] found id: ""
	I1206 10:11:43.753361  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.753371  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:43.753377  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:43.753468  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:43.778503  293728 cri.go:89] found id: ""
	I1206 10:11:43.778526  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.778535  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:43.778541  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:43.778622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:43.806530  293728 cri.go:89] found id: ""
	I1206 10:11:43.806554  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.806564  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:43.806570  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:43.806652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:43.831543  293728 cri.go:89] found id: ""
	I1206 10:11:43.831570  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.831579  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:43.831585  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:43.831644  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:43.856767  293728 cri.go:89] found id: ""
	I1206 10:11:43.856791  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.856800  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:43.856808  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:43.856821  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:43.926714  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:43.918532   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.919086   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.920754   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.921218   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.922816   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:43.918532   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.919086   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.920754   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.921218   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.922816   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:43.926736  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:43.926751  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:43.953140  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:43.953176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:43.986579  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:43.986611  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:44.046797  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:44.046832  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:46.561087  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:46.574475  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:46.574548  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:46.603568  293728 cri.go:89] found id: ""
	I1206 10:11:46.603593  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.603601  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:46.603608  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:46.603688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:46.629999  293728 cri.go:89] found id: ""
	I1206 10:11:46.630024  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.630034  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:46.630040  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:46.630120  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:46.657373  293728 cri.go:89] found id: ""
	I1206 10:11:46.657399  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.657408  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:46.657414  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:46.657472  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:46.682131  293728 cri.go:89] found id: ""
	I1206 10:11:46.682157  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.682166  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:46.682172  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:46.682229  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:46.712112  293728 cri.go:89] found id: ""
	I1206 10:11:46.712184  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.712201  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:46.712209  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:46.712273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:46.737272  293728 cri.go:89] found id: ""
	I1206 10:11:46.737308  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.737317  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:46.737323  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:46.737402  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:46.762747  293728 cri.go:89] found id: ""
	I1206 10:11:46.762773  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.762782  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:46.762814  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:46.762904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:46.789056  293728 cri.go:89] found id: ""
	I1206 10:11:46.789092  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.789101  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:46.789110  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:46.789122  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:46.852031  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:46.843591   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.844469   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846096   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846414   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.847930   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:46.843591   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.844469   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846096   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846414   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.847930   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:46.852055  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:46.852068  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:46.878458  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:46.878490  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:46.909497  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:46.909523  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:46.966671  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:46.966706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:49.484723  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:49.499040  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:49.499143  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:49.530152  293728 cri.go:89] found id: ""
	I1206 10:11:49.530195  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.530204  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:49.530228  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:49.530311  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:49.556277  293728 cri.go:89] found id: ""
	I1206 10:11:49.556302  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.556311  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:49.556317  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:49.556422  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:49.582278  293728 cri.go:89] found id: ""
	I1206 10:11:49.582303  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.582312  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:49.582318  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:49.582386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:49.608504  293728 cri.go:89] found id: ""
	I1206 10:11:49.608529  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.608538  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:49.608544  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:49.608624  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:49.633347  293728 cri.go:89] found id: ""
	I1206 10:11:49.633414  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.633429  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:49.633436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:49.633495  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:49.658195  293728 cri.go:89] found id: ""
	I1206 10:11:49.658223  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.658233  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:49.658240  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:49.658297  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:49.691086  293728 cri.go:89] found id: ""
	I1206 10:11:49.691112  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.691122  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:49.691128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:49.691213  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:49.716625  293728 cri.go:89] found id: ""
	I1206 10:11:49.716652  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.716661  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:49.716669  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:49.716684  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:49.778048  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:49.778093  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:49.792187  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:49.792216  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:49.858528  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:49.849703   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.850362   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852120   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852678   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.854314   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:49.849703   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.850362   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852120   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852678   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.854314   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:49.858551  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:49.858566  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:49.884659  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:49.884691  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:52.413397  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:52.424250  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:52.424322  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:52.454481  293728 cri.go:89] found id: ""
	I1206 10:11:52.454557  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.454573  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:52.454581  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:52.454642  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:52.487281  293728 cri.go:89] found id: ""
	I1206 10:11:52.487315  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.487325  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:52.487331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:52.487408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:52.522975  293728 cri.go:89] found id: ""
	I1206 10:11:52.523008  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.523025  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:52.523032  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:52.523102  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:52.557389  293728 cri.go:89] found id: ""
	I1206 10:11:52.557421  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.557430  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:52.557436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:52.557494  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:52.583449  293728 cri.go:89] found id: ""
	I1206 10:11:52.583474  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.583483  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:52.583490  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:52.583608  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:52.608370  293728 cri.go:89] found id: ""
	I1206 10:11:52.608412  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.608422  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:52.608429  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:52.608499  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:52.637950  293728 cri.go:89] found id: ""
	I1206 10:11:52.638026  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.638051  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:52.638069  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:52.638160  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:52.663271  293728 cri.go:89] found id: ""
	I1206 10:11:52.663349  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.663413  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:52.663443  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:52.663464  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:52.721303  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:52.721339  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:52.735517  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:52.735548  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:52.806629  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:52.798101   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.799086   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800264   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800722   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.802387   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:52.798101   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.799086   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800264   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800722   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.802387   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:52.806652  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:52.806666  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:52.834909  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:52.834944  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:55.365104  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:55.376039  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:55.376112  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:55.401088  293728 cri.go:89] found id: ""
	I1206 10:11:55.401114  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.401123  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:55.401130  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:55.401187  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:55.426712  293728 cri.go:89] found id: ""
	I1206 10:11:55.426735  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.426744  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:55.426752  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:55.426808  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:55.453355  293728 cri.go:89] found id: ""
	I1206 10:11:55.453433  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.453449  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:55.453456  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:55.453524  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:55.482694  293728 cri.go:89] found id: ""
	I1206 10:11:55.482786  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.482809  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:55.482831  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:55.482965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:55.517524  293728 cri.go:89] found id: ""
	I1206 10:11:55.517567  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.517576  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:55.517582  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:55.517651  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:55.552808  293728 cri.go:89] found id: ""
	I1206 10:11:55.552887  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.552919  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:55.552943  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:55.553051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:55.582318  293728 cri.go:89] found id: ""
	I1206 10:11:55.582391  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.582413  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:55.582435  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:55.582545  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:55.611979  293728 cri.go:89] found id: ""
	I1206 10:11:55.612012  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.612021  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:55.612030  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:55.612043  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:55.641663  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:55.641691  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:55.699247  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:55.699281  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:55.714284  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:55.714312  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:55.779980  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:55.771718   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.772511   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774153   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774506   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.776084   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:55.771718   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.772511   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774153   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774506   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.776084   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:55.780002  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:55.780020  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:58.307533  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:58.318444  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:58.318517  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:58.346128  293728 cri.go:89] found id: ""
	I1206 10:11:58.346181  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.346194  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:58.346202  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:58.346276  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:58.370957  293728 cri.go:89] found id: ""
	I1206 10:11:58.370992  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.371001  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:58.371013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:58.371093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:58.397685  293728 cri.go:89] found id: ""
	I1206 10:11:58.397717  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.397726  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:58.397732  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:58.397803  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:58.426933  293728 cri.go:89] found id: ""
	I1206 10:11:58.426959  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.426967  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:58.426973  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:58.427051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:58.456330  293728 cri.go:89] found id: ""
	I1206 10:11:58.456365  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.456375  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:58.456381  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:58.456448  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:58.494975  293728 cri.go:89] found id: ""
	I1206 10:11:58.495018  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.495027  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:58.495034  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:58.495106  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:58.532346  293728 cri.go:89] found id: ""
	I1206 10:11:58.532379  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.532389  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:58.532395  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:58.532465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:58.558540  293728 cri.go:89] found id: ""
	I1206 10:11:58.558576  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.558584  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:58.558593  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:58.558605  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:58.573220  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:58.573249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:58.639437  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:58.631044   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.631569   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633054   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633435   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.634868   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:58.631044   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.631569   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633054   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633435   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.634868   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:58.639512  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:58.639535  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:58.664823  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:58.664861  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:58.692934  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:58.692966  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:01.250858  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:01.262935  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:01.263112  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:01.291081  293728 cri.go:89] found id: ""
	I1206 10:12:01.291107  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.291117  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:01.291123  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:01.291204  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:01.318105  293728 cri.go:89] found id: ""
	I1206 10:12:01.318138  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.318147  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:01.318168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:01.318249  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:01.344419  293728 cri.go:89] found id: ""
	I1206 10:12:01.344488  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.344514  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:01.344528  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:01.344601  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:01.370652  293728 cri.go:89] found id: ""
	I1206 10:12:01.370677  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.370686  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:01.370693  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:01.370751  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:01.397501  293728 cri.go:89] found id: ""
	I1206 10:12:01.397528  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.397538  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:01.397544  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:01.397603  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:01.423444  293728 cri.go:89] found id: ""
	I1206 10:12:01.423517  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.423541  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:01.423563  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:01.423646  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:01.453268  293728 cri.go:89] found id: ""
	I1206 10:12:01.453294  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.453303  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:01.453316  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:01.453417  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:01.481810  293728 cri.go:89] found id: ""
	I1206 10:12:01.481890  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.481915  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:01.481932  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:01.481959  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:01.538994  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:01.539079  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:01.553293  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:01.553320  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:01.623989  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:01.612749   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.615513   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.616460   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618024   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618347   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:01.612749   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.615513   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.616460   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618024   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618347   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
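
Each retry cycle enumerates the control-plane components one by one with crictl, as the cri.go lines above show. A sketch of that enumeration, assuming crictl is installed in the node and using the same component names the log queries:

	# List all containers (running or exited) per component; an empty result
	# corresponds to the `found id: ""` lines in the log.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  if [ -n "$ids" ]; then echo "$c: $ids"; else echo "$c: none"; fi
	done
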
	I1206 10:12:01.624063  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:01.624085  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:01.649724  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:01.649757  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:04.179886  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:04.191201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:04.191273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:04.216964  293728 cri.go:89] found id: ""
	I1206 10:12:04.217045  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.217065  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:04.217072  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:04.217168  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:04.252840  293728 cri.go:89] found id: ""
	I1206 10:12:04.252875  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.252884  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:04.252891  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:04.252965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:04.281583  293728 cri.go:89] found id: ""
	I1206 10:12:04.281614  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.281623  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:04.281629  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:04.281695  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:04.311479  293728 cri.go:89] found id: ""
	I1206 10:12:04.311547  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.311571  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:04.311585  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:04.311658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:04.337184  293728 cri.go:89] found id: ""
	I1206 10:12:04.337213  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.337221  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:04.337228  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:04.337307  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:04.363672  293728 cri.go:89] found id: ""
	I1206 10:12:04.363705  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.363715  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:04.363738  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:04.363836  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:04.394214  293728 cri.go:89] found id: ""
	I1206 10:12:04.394240  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.394249  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:04.394256  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:04.394367  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:04.419254  293728 cri.go:89] found id: ""
	I1206 10:12:04.419335  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.419359  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:04.419403  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:04.419437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:04.451555  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:04.451582  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:04.509304  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:04.509336  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:04.523821  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:04.523848  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:04.591566  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:04.581768   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.583295   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.584171   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.585971   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.586453   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:04.581768   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.583295   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.584171   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.585971   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.586453   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
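
When no containers are found, minikube falls back to gathering host-level logs. The commands appear verbatim in the ssh_runner lines; collected together, per the log, they are:

	# kubelet and containerd unit logs, kernel warnings, and a container
	# listing with a crictl-or-docker fallback (the backtick form is the
	# one the log actually runs).
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
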
	I1206 10:12:04.591591  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:04.591604  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:07.121570  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:07.132505  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:07.132585  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:07.157021  293728 cri.go:89] found id: ""
	I1206 10:12:07.157047  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.157056  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:07.157063  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:07.157151  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:07.182478  293728 cri.go:89] found id: ""
	I1206 10:12:07.182510  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.182519  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:07.182526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:07.182597  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:07.212401  293728 cri.go:89] found id: ""
	I1206 10:12:07.212424  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.212433  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:07.212439  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:07.212498  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:07.246228  293728 cri.go:89] found id: ""
	I1206 10:12:07.246255  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.246264  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:07.246271  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:07.246333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:07.273777  293728 cri.go:89] found id: ""
	I1206 10:12:07.273802  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.273811  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:07.273817  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:07.273878  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:07.302425  293728 cri.go:89] found id: ""
	I1206 10:12:07.302464  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.302473  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:07.302481  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:07.302556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:07.328379  293728 cri.go:89] found id: ""
	I1206 10:12:07.328403  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.328412  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:07.328418  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:07.328476  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:07.358727  293728 cri.go:89] found id: ""
	I1206 10:12:07.358751  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.358760  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:07.358771  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:07.358811  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:07.415522  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:07.415561  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:07.429309  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:07.429338  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:07.497723  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:07.488450   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.488945   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.490709   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.491285   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.492907   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:07.488450   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.488945   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.490709   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.491285   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.492907   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
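
The roughly three-second cadence between cycles comes from the apiserver wait: each iteration starts with a pgrep for the apiserver process and, when that fails, re-runs the diagnostics. A hedged reconstruction of that loop (the interval is inferred from the log timestamps, not taken from source):

	# Poll until a kube-apiserver process belonging to this minikube node appears.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3   # assumed interval; the log shows ~3 s between attempts
	done
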
	I1206 10:12:07.497749  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:07.497762  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:07.524612  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:07.524648  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:10.055528  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:10.066871  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:10.066968  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:10.092582  293728 cri.go:89] found id: ""
	I1206 10:12:10.092611  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.092622  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:10.092630  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:10.092695  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:10.120230  293728 cri.go:89] found id: ""
	I1206 10:12:10.120321  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.120347  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:10.120366  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:10.120465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:10.146387  293728 cri.go:89] found id: ""
	I1206 10:12:10.146464  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.146489  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:10.146508  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:10.146582  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:10.173457  293728 cri.go:89] found id: ""
	I1206 10:12:10.173484  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.173493  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:10.173500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:10.173592  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:10.202187  293728 cri.go:89] found id: ""
	I1206 10:12:10.202262  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.202285  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:10.202303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:10.202393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:10.232838  293728 cri.go:89] found id: ""
	I1206 10:12:10.232901  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.232922  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:10.232940  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:10.233025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:10.267445  293728 cri.go:89] found id: ""
	I1206 10:12:10.267520  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.267543  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:10.267561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:10.267650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:10.298314  293728 cri.go:89] found id: ""
	I1206 10:12:10.298389  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.298412  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:10.298434  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:10.298472  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:10.325341  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:10.325374  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:10.385049  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:10.385081  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:10.398513  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:10.398540  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:10.463844  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:10.454441   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.455251   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457119   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457874   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.459632   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:10.454441   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.455251   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457119   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457874   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.459632   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:10.463908  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:10.463945  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:12.991294  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:13.006571  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:13.006645  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:13.040431  293728 cri.go:89] found id: ""
	I1206 10:12:13.040457  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.040466  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:13.040479  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:13.040544  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:13.066025  293728 cri.go:89] found id: ""
	I1206 10:12:13.066047  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.066056  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:13.066062  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:13.066134  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:13.093459  293728 cri.go:89] found id: ""
	I1206 10:12:13.093482  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.093491  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:13.093496  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:13.093556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:13.118066  293728 cri.go:89] found id: ""
	I1206 10:12:13.118089  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.118098  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:13.118104  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:13.118162  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:13.145619  293728 cri.go:89] found id: ""
	I1206 10:12:13.145685  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.145704  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:13.145711  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:13.145770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:13.174833  293728 cri.go:89] found id: ""
	I1206 10:12:13.174857  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.174866  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:13.174872  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:13.174934  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:13.200490  293728 cri.go:89] found id: ""
	I1206 10:12:13.200517  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.200526  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:13.200532  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:13.200590  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:13.243683  293728 cri.go:89] found id: ""
	I1206 10:12:13.243709  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.243718  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:13.243726  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:13.243741  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:13.279303  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:13.279330  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:13.337861  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:13.337897  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:13.351559  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:13.351634  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:13.413990  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:13.406460   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.406956   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408410   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408802   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.410225   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:13.406460   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.406956   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408410   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408802   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.410225   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
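
The "describe nodes" step does not use the host kubectl; it runs the version-pinned binary inside the node against the node-local kubeconfig, which is why it fails with the same connection-refused error. This is exactly the command from the log:

	# Can only succeed once the apiserver is listening on localhost:8443.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
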
	I1206 10:12:13.414012  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:13.414028  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:15.940438  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:15.952379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:15.952452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:15.997713  293728 cri.go:89] found id: ""
	I1206 10:12:15.997741  293728 logs.go:282] 0 containers: []
	W1206 10:12:15.997749  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:15.997755  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:15.997814  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:16.027447  293728 cri.go:89] found id: ""
	I1206 10:12:16.027477  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.027486  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:16.027494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:16.027552  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:16.056201  293728 cri.go:89] found id: ""
	I1206 10:12:16.056224  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.056232  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:16.056238  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:16.056296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:16.080619  293728 cri.go:89] found id: ""
	I1206 10:12:16.080641  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.080650  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:16.080657  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:16.080736  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:16.106294  293728 cri.go:89] found id: ""
	I1206 10:12:16.106316  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.106324  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:16.106330  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:16.106393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:16.131999  293728 cri.go:89] found id: ""
	I1206 10:12:16.132026  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.132036  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:16.132042  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:16.132103  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:16.156693  293728 cri.go:89] found id: ""
	I1206 10:12:16.156719  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.156734  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:16.156740  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:16.156819  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:16.182391  293728 cri.go:89] found id: ""
	I1206 10:12:16.182416  293728 logs.go:282] 0 containers: []
	W1206 10:12:16.182426  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:16.182436  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:16.182467  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:16.262961  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:16.251126   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.252302   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.253220   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257326   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257858   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:16.251126   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.252302   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.253220   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257326   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:16.257858   12705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
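
The dial target [::1]:8443 shows kubectl trying the IPv6 loopback for localhost; a refused connection there (and on 127.0.0.1) means no socket is bound to port 8443 at all. One way to confirm from inside the node, assuming iproute2's ss is available:

	# Show any TCP listener on port 8443; an empty result confirms the
	# apiserver never came up.
	sudo ss -ltnp 'sport = :8443'
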
	I1206 10:12:16.262991  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:16.263024  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:16.292146  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:16.292180  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:16.323803  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:16.323830  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:16.382496  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:16.382530  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:18.896413  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:18.906898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:18.907007  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:18.930731  293728 cri.go:89] found id: ""
	I1206 10:12:18.930763  293728 logs.go:282] 0 containers: []
	W1206 10:12:18.930773  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:18.930779  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:18.930844  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:18.955309  293728 cri.go:89] found id: ""
	I1206 10:12:18.955334  293728 logs.go:282] 0 containers: []
	W1206 10:12:18.955343  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:18.955349  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:18.955428  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:18.987453  293728 cri.go:89] found id: ""
	I1206 10:12:18.987480  293728 logs.go:282] 0 containers: []
	W1206 10:12:18.987489  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:18.987495  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:18.987559  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:19.016315  293728 cri.go:89] found id: ""
	I1206 10:12:19.016359  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.016369  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:19.016376  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:19.016457  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:19.046838  293728 cri.go:89] found id: ""
	I1206 10:12:19.046914  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.046939  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:19.046958  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:19.047088  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:19.076303  293728 cri.go:89] found id: ""
	I1206 10:12:19.076339  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.076348  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:19.076355  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:19.076424  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:19.100478  293728 cri.go:89] found id: ""
	I1206 10:12:19.100505  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.100514  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:19.100520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:19.100600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:19.125238  293728 cri.go:89] found id: ""
	I1206 10:12:19.125303  293728 logs.go:282] 0 containers: []
	W1206 10:12:19.125317  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:19.125327  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:19.125338  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:19.181824  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:19.181858  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:19.195937  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:19.195963  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:19.288898  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:19.278661   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.279479   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.281324   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.282074   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.284115   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:19.278661   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.279479   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.281324   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.282074   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:19.284115   12827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:19.288922  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:19.288935  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:19.314454  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:19.314487  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:21.845581  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:21.856143  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:21.856207  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:21.880174  293728 cri.go:89] found id: ""
	I1206 10:12:21.880197  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.880206  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:21.880212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:21.880273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:21.906162  293728 cri.go:89] found id: ""
	I1206 10:12:21.906195  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.906204  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:21.906209  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:21.906277  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:21.929912  293728 cri.go:89] found id: ""
	I1206 10:12:21.929936  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.929945  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:21.929951  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:21.930017  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:21.955257  293728 cri.go:89] found id: ""
	I1206 10:12:21.955288  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.955297  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:21.955303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:21.955403  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:21.988656  293728 cri.go:89] found id: ""
	I1206 10:12:21.988682  293728 logs.go:282] 0 containers: []
	W1206 10:12:21.988691  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:21.988698  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:21.988766  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:22.026205  293728 cri.go:89] found id: ""
	I1206 10:12:22.026232  293728 logs.go:282] 0 containers: []
	W1206 10:12:22.026241  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:22.026248  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:22.026321  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:22.056883  293728 cri.go:89] found id: ""
	I1206 10:12:22.056906  293728 logs.go:282] 0 containers: []
	W1206 10:12:22.056915  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:22.056923  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:22.056983  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:22.087245  293728 cri.go:89] found id: ""
	I1206 10:12:22.087269  293728 logs.go:282] 0 containers: []
	W1206 10:12:22.087277  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:22.087286  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:22.087296  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:22.148181  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:22.148213  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:22.161924  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:22.161952  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:22.238449  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:22.229386   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.230236   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.231860   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.232500   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.234010   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:22.229386   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.230236   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.231860   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.232500   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:22.234010   12941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:22.238523  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:22.238550  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:22.268691  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:22.268765  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:24.800715  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:24.811471  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:24.811557  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:24.836234  293728 cri.go:89] found id: ""
	I1206 10:12:24.836261  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.836270  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:24.836277  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:24.836335  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:24.861915  293728 cri.go:89] found id: ""
	I1206 10:12:24.861942  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.861951  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:24.861957  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:24.862015  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:24.886931  293728 cri.go:89] found id: ""
	I1206 10:12:24.886958  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.886968  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:24.886974  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:24.887058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:24.913606  293728 cri.go:89] found id: ""
	I1206 10:12:24.913633  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.913642  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:24.913649  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:24.913708  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:24.942656  293728 cri.go:89] found id: ""
	I1206 10:12:24.942690  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.942699  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:24.942706  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:24.942772  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:24.973528  293728 cri.go:89] found id: ""
	I1206 10:12:24.973563  293728 logs.go:282] 0 containers: []
	W1206 10:12:24.973572  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:24.973579  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:24.973654  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:25.011969  293728 cri.go:89] found id: ""
	I1206 10:12:25.012007  293728 logs.go:282] 0 containers: []
	W1206 10:12:25.012017  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:25.012024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:25.012105  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:25.041306  293728 cri.go:89] found id: ""
	I1206 10:12:25.041340  293728 logs.go:282] 0 containers: []
	W1206 10:12:25.041349  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:25.041363  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:25.041377  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:25.068464  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:25.068503  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:25.098409  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:25.098436  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:25.156122  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:25.156158  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:25.170373  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:25.170405  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:25.248624  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:25.240035   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.240794   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.242472   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.243030   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.244596   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:25.240035   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.240794   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.242472   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.243030   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:25.244596   13067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:27.748906  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:27.759522  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:27.759591  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:27.785227  293728 cri.go:89] found id: ""
	I1206 10:12:27.785250  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.785258  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:27.785264  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:27.785319  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:27.810979  293728 cri.go:89] found id: ""
	I1206 10:12:27.811011  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.811021  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:27.811028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:27.811085  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:27.837232  293728 cri.go:89] found id: ""
	I1206 10:12:27.837298  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.837313  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:27.837320  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:27.837376  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:27.861601  293728 cri.go:89] found id: ""
	I1206 10:12:27.861625  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.861634  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:27.861641  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:27.861699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:27.886862  293728 cri.go:89] found id: ""
	I1206 10:12:27.886887  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.886897  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:27.886903  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:27.886960  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:27.911189  293728 cri.go:89] found id: ""
	I1206 10:12:27.911213  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.911222  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:27.911229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:27.911285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:27.935326  293728 cri.go:89] found id: ""
	I1206 10:12:27.935352  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.935361  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:27.935368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:27.935452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:27.959524  293728 cri.go:89] found id: ""
	I1206 10:12:27.959545  293728 logs.go:282] 0 containers: []
	W1206 10:12:27.959555  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:27.959564  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:27.959575  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:28.028099  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:28.028143  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:28.048460  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:28.048488  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:28.118674  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:28.109022   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.109888   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.111697   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.112355   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.114062   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:28.109022   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.109888   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.111697   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.112355   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:28.114062   13168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:28.118697  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:28.118709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:28.144591  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:28.144630  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:30.673088  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:30.683869  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:30.683949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:30.708341  293728 cri.go:89] found id: ""
	I1206 10:12:30.708364  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.708372  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:30.708379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:30.708434  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:30.734236  293728 cri.go:89] found id: ""
	I1206 10:12:30.734261  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.734270  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:30.734276  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:30.734333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:30.760476  293728 cri.go:89] found id: ""
	I1206 10:12:30.760499  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.760508  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:30.760520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:30.760580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:30.785771  293728 cri.go:89] found id: ""
	I1206 10:12:30.785793  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.785802  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:30.785808  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:30.785871  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:30.814408  293728 cri.go:89] found id: ""
	I1206 10:12:30.814431  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.814439  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:30.814445  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:30.814504  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:30.840084  293728 cri.go:89] found id: ""
	I1206 10:12:30.840108  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.840117  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:30.840124  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:30.840183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:30.865698  293728 cri.go:89] found id: ""
	I1206 10:12:30.865723  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.865732  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:30.865745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:30.865807  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:30.895469  293728 cri.go:89] found id: ""
	I1206 10:12:30.895538  293728 logs.go:282] 0 containers: []
	W1206 10:12:30.895553  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:30.895562  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:30.895573  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:30.952609  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:30.952644  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:30.966729  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:30.966758  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:31.059967  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:31.049168   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.050975   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.051825   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.053830   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.054324   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:31.049168   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.050975   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.051825   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.053830   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:31.054324   13283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:31.059992  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:31.060006  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:31.087739  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:31.087785  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:33.618907  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:33.633558  293728 out.go:203] 
	W1206 10:12:33.636407  293728 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1206 10:12:33.636439  293728 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1206 10:12:33.636448  293728 out.go:285] * Related issues:
	W1206 10:12:33.636468  293728 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1206 10:12:33.636488  293728 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1206 10:12:33.640150  293728 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.180738951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.180841500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181051963Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181150721Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181229926Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181300302Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181371777Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181434227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181504595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181620526Z" level=info msg="Connect containerd service"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.182068703Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.183078485Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193499317Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193692279Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193840088Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193788608Z" level=info msg="Start recovering state"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235102688Z" level=info msg="Start event monitor"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235301393Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235445231Z" level=info msg="Start streaming server"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235540452Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235788569Z" level=info msg="runtime interface starting up..."
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235878794Z" level=info msg="starting plugins..."
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235966762Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:06:31 newest-cni-387337 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.238179492Z" level=info msg="containerd successfully booted in 0.085161s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:43.192185   13791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:43.193040   13791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:43.194577   13791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:43.195198   13791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:43.197066   13791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:12:43 up  1:55,  0 user,  load average: 0.69, 0.67, 1.27
	Linux newest-cni-387337 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:12:38 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:38 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:39 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:40 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:40 newest-cni-387337 kubelet[13638]: E1206 10:12:40.622728   13638 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:40 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:40 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:41 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 06 10:12:41 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:41 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:41 newest-cni-387337 kubelet[13688]: E1206 10:12:41.611030   13688 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:41 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:41 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:42 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 06 10:12:42 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:42 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:42 newest-cni-387337 kubelet[13695]: E1206 10:12:42.341300   13695 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:42 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:42 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:42 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 06 10:12:42 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:42 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:43 newest-cni-387337 kubelet[13762]: E1206 10:12:43.063716   13762 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:43 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:43 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
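
The failure chain is recoverable from the sections above: every kubelet restart fails configuration validation with "kubelet is configured to not run on a host using cgroup v1", so no static pods are ever created, no kube-apiserver process appears, and the start exits with K8S_APISERVER_MISSING after the 6m0s wait. This is consistent with the "~20.04.1-Ubuntu" host kernel in the kernel section, since Ubuntu 20.04 boots with cgroup v1 by default; the containerd warning about "no network config found in /etc/cni/net.d" is expected at this stage, before anything has written a CNI config. A generic way to confirm the host's cgroup version (not part of the captured log; assumes the profile container from this report is still up):

	$ minikube -p newest-cni-387337 ssh "stat -fc %T /sys/fs/cgroup"
	# "cgroup2fs" indicates the unified cgroup v2 hierarchy;
	# "tmpfs" indicates the legacy cgroup v1 hierarchy, which this
	# v1.35.0-beta.0 kubelet rejects at startup
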
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (342.408884ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-387337" apiserver is not running, skipping kubectl commands (state="Stopped")
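
Both status probes in this post-mortem use minikube's Go-template output: `--format={{.APIServer}}` and `--format={{.Host}}` each print a single field of the status struct, and the command exits non-zero while any component is stopped (hence the harness note "exit status 2 (may be ok)"). A sketch that reads several fields in one call, reusing the field names already present in this log plus the standard Kubelet field:

	$ out/minikube-linux-arm64 status -p newest-cni-387337 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# expected for this run: "Running Stopped Stopped", still with a non-zero exit status
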
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-387337
helpers_test.go:243: (dbg) docker inspect newest-cni-387337:

-- stdout --
	[
	    {
	        "Id": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	        "Created": "2025-12-06T09:56:17.358293629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:06:25.490985794Z",
	            "FinishedAt": "2025-12-06T10:06:24.07452303Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/hosts",
	        "LogPath": "/var/lib/docker/containers/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9/e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9-json.log",
	        "Name": "/newest-cni-387337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-387337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-387337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e89a14c7a996b24fcd96ea9993f5b5291f883ee077c95cb0fbcd49a300967fc9",
	                "LowerDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc3a55d4cbc5e00a478279c953d824476431f0ff3a26d71f28083040d615a4c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-387337",
	                "Source": "/var/lib/docker/volumes/newest-cni-387337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-387337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-387337",
	                "name.minikube.sigs.k8s.io": "newest-cni-387337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0237cbac4089b5971baf99dcc5f5da9d321416f1c02aecd4eecab8f5eca5da8a",
	            "SandboxKey": "/var/run/docker/netns/0237cbac4089",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-387337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:c0:9f:b1:4f:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f42a70d42248e7fb537c8957fc3c9ad0a04046b4da244cdde31b86ebc56a160b",
	                    "EndpointID": "315fc1e3324af45e0df5a53d34bf5d6797d7154b55022bdff9ab7809e194b0cf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-387337",
	                        "e89a14c7a996"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
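
When only a field or two of this inspect output is needed, `docker inspect -f` takes a Go template instead of dumping the full JSON; the index form is required for the network name because it contains hyphens. A sketch against the container from this report (values as captured above):

	$ docker inspect -f '{{.State.Status}}' newest-cni-387337
	running
	$ docker inspect -f '{{(index .NetworkSettings.Networks "newest-cni-387337").IPAddress}}' newest-cni-387337
	192.168.85.2
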
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (457.407871ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-387337 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-387337 logs -n 25: (1.583721356s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p embed-certs-100767 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-100767                                                                                                                                                                                                                                      │ embed-certs-100767           │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:53 UTC │ 06 Dec 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-837391 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:54 UTC │ 06 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:55 UTC │ 06 Dec 25 09:55 UTC │
	│ image   │ default-k8s-diff-port-837391 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-837391 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-837391                                                                                                                                                                                                                            │ default-k8s-diff-port-837391 │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │ 06 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-257359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-257359 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-257359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │ 06 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-257359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-257359            │ jenkins │ v1.37.0 │ 06 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-387337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:04 UTC │                     │
	│ stop    │ -p newest-cni-387337 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │ 06 Dec 25 10:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-387337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │ 06 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-387337 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:06 UTC │                     │
	│ image   │ newest-cni-387337 image list --format=json                                                                                                                                                                                                                 │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ pause   │ -p newest-cni-387337 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ unpause │ -p newest-cni-387337 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-387337            │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:06:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:06:25.195145  293728 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:06:25.195325  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195335  293728 out.go:374] Setting ErrFile to fd 2...
	I1206 10:06:25.195341  293728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:06:25.195634  293728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:06:25.196028  293728 out.go:368] Setting JSON to false
	I1206 10:06:25.196926  293728 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6537,"bootTime":1765009049,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:06:25.196997  293728 start.go:143] virtualization:  
	I1206 10:06:25.199959  293728 out.go:179] * [newest-cni-387337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:06:25.203880  293728 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:06:25.204017  293728 notify.go:221] Checking for updates...
	I1206 10:06:25.210368  293728 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:06:25.213374  293728 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:25.216371  293728 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:06:25.221036  293728 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:06:25.223973  293728 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:06:25.227572  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:25.228243  293728 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:06:25.261513  293728 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:06:25.261626  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.340601  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.331029372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.340708  293728 docker.go:319] overlay module found
	I1206 10:06:25.343872  293728 out.go:179] * Using the docker driver based on existing profile
	I1206 10:06:25.346835  293728 start.go:309] selected driver: docker
	I1206 10:06:25.346867  293728 start.go:927] validating driver "docker" against &{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.346969  293728 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:06:25.347911  293728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:06:25.407260  293728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:06:25.398348793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:06:25.407652  293728 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 10:06:25.407684  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:25.407750  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:25.407788  293728 start.go:353] cluster config:
	{Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:25.410983  293728 out.go:179] * Starting "newest-cni-387337" primary control-plane node in "newest-cni-387337" cluster
	I1206 10:06:25.413800  293728 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:06:25.416704  293728 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:06:25.419472  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:25.419517  293728 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:06:25.419530  293728 cache.go:65] Caching tarball of preloaded images
	I1206 10:06:25.419542  293728 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:06:25.419614  293728 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:06:25.419624  293728 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:06:25.419745  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.439065  293728 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:06:25.439097  293728 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:06:25.439117  293728 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:06:25.439151  293728 start.go:360] acquireMachinesLock for newest-cni-387337: {Name:mk92b9dcf5cb758030b3523b1daf9a8577526d2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:06:25.439218  293728 start.go:364] duration metric: took 44.948µs to acquireMachinesLock for "newest-cni-387337"
	I1206 10:06:25.439242  293728 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:06:25.439250  293728 fix.go:54] fixHost starting: 
	I1206 10:06:25.439553  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.455936  293728 fix.go:112] recreateIfNeeded on newest-cni-387337: state=Stopped err=<nil>
	W1206 10:06:25.455970  293728 fix.go:138] unexpected machine state, will restart: <nil>
	W1206 10:06:22.222571  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:24.223444  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:25.459174  293728 out.go:252] * Restarting existing docker container for "newest-cni-387337" ...
	I1206 10:06:25.459260  293728 cli_runner.go:164] Run: docker start newest-cni-387337
	I1206 10:06:25.713574  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:25.738668  293728 kic.go:430] container "newest-cni-387337" state is running.
	I1206 10:06:25.739140  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:25.765706  293728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/config.json ...
	I1206 10:06:25.766035  293728 machine.go:94] provisionDockerMachine start ...
	I1206 10:06:25.766147  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:25.787280  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:25.787973  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:25.787996  293728 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:06:25.789031  293728 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:06:28.943483  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:28.943510  293728 ubuntu.go:182] provisioning hostname "newest-cni-387337"
	I1206 10:06:28.943583  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:28.962379  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:28.962708  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:28.962726  293728 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-387337 && echo "newest-cni-387337" | sudo tee /etc/hostname
	I1206 10:06:29.136463  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-387337
	
	I1206 10:06:29.136552  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.155008  293728 main.go:143] libmachine: Using SSH client type: native
	I1206 10:06:29.155343  293728 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1206 10:06:29.155363  293728 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-387337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-387337/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-387337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:06:29.311555  293728 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:06:29.311646  293728 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:06:29.311703  293728 ubuntu.go:190] setting up certificates
	I1206 10:06:29.311733  293728 provision.go:84] configureAuth start
	I1206 10:06:29.311826  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.328361  293728 provision.go:143] copyHostCerts
	I1206 10:06:29.328435  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:06:29.328455  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:06:29.328532  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:06:29.328644  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:06:29.328655  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:06:29.328683  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:06:29.328754  293728 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:06:29.328763  293728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:06:29.328788  293728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:06:29.328850  293728 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.newest-cni-387337 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-387337]
	I1206 10:06:29.477422  293728 provision.go:177] copyRemoteCerts
	I1206 10:06:29.477497  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:06:29.477551  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.495349  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.603554  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:06:29.622338  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:06:29.641011  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:06:29.660417  293728 provision.go:87] duration metric: took 348.656521ms to configureAuth
	I1206 10:06:29.660488  293728 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:06:29.660700  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:29.660714  293728 machine.go:97] duration metric: took 3.894659315s to provisionDockerMachine
	I1206 10:06:29.660722  293728 start.go:293] postStartSetup for "newest-cni-387337" (driver="docker")
	I1206 10:06:29.660734  293728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:06:29.660787  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:06:29.660840  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.679336  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.792654  293728 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:06:29.796414  293728 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:06:29.796451  293728 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:06:29.796481  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:06:29.796555  293728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:06:29.796637  293728 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:06:29.796752  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:06:29.804466  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:29.822913  293728 start.go:296] duration metric: took 162.176035ms for postStartSetup
	I1206 10:06:29.822993  293728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:06:29.823033  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.841962  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.944706  293728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:06:29.949621  293728 fix.go:56] duration metric: took 4.510364001s for fixHost
	I1206 10:06:29.949690  293728 start.go:83] releasing machines lock for "newest-cni-387337", held for 4.510458303s
	I1206 10:06:29.949801  293728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-387337
	I1206 10:06:29.966982  293728 ssh_runner.go:195] Run: cat /version.json
	I1206 10:06:29.967044  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.967315  293728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:06:29.967425  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:29.989346  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:29.995399  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:30.108934  293728 ssh_runner.go:195] Run: systemctl --version
	W1206 10:06:26.722852  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:29.222555  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:30.251570  293728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:06:30.256600  293728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:06:30.256686  293728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:06:30.265366  293728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
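	The find invocation above is logged with its shell quoting stripped. An equivalent copy-pasteable form, with the escaping restored (a sketch over the same paths; the behavior matches the logged command):
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;   # park bridge/podman configs under .mk_disabled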
	I1206 10:06:30.265436  293728 start.go:496] detecting cgroup driver to use...
	I1206 10:06:30.265475  293728 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:06:30.265547  293728 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:06:30.285393  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:06:30.300014  293728 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:06:30.300101  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:06:30.316388  293728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:06:30.330703  293728 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:06:30.447811  293728 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:06:30.578928  293728 docker.go:234] disabling docker service ...
	I1206 10:06:30.579012  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:06:30.595245  293728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:06:30.608936  293728 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:06:30.732584  293728 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:06:30.854426  293728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:06:30.867755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:06:30.882294  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:06:30.891997  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:06:30.901695  293728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:06:30.901766  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:06:30.911307  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.920864  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:06:30.930280  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:06:30.939955  293728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:06:30.948517  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:06:30.957894  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:06:30.967715  293728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:06:30.977793  293728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:06:30.985557  293728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:06:30.993239  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.114748  293728 ssh_runner.go:195] Run: sudo systemctl restart containerd
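	Taken together, the sed edits above switch containerd onto the cgroupfs driver that was detected on the host, then bounce the service. The load-bearing steps, condensed into a hand-runnable form (a sketch assuming the stock /etc/containerd/config.toml layout, using the same commands the log runs):
	
	  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml   # force cgroupfs
	  sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml                                         # drop the legacy v1 knob
	  sudo systemctl daemon-reload && sudo systemctl restart containerd                                   # pick up the change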
	I1206 10:06:31.239476  293728 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:06:31.239597  293728 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:06:31.244664  293728 start.go:564] Will wait 60s for crictl version
	I1206 10:06:31.244770  293728 ssh_runner.go:195] Run: which crictl
	I1206 10:06:31.249231  293728 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:06:31.276528  293728 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:06:31.276637  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.298790  293728 ssh_runner.go:195] Run: containerd --version
	I1206 10:06:31.323558  293728 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:06:31.326534  293728 cli_runner.go:164] Run: docker network inspect newest-cni-387337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
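	The --format argument here is a Go template that assembles a JSON object field by field. For interactive debugging, the same IPAM data can be pulled with a much smaller template (a sketch against the profile's network name):
	
	  docker network inspect newest-cni-387337 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'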
	I1206 10:06:31.343556  293728 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 10:06:31.347752  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
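	That one-liner is an idempotent hosts-file edit: filter out any stale host.minikube.internal line, append the current mapping, and copy the result back in one step. Unrolled for readability (same logic as the logged command):
	
	  { grep -v $'\thost.minikube.internal$' /etc/hosts        # drop any previous entry
	    echo $'192.168.85.1\thost.minikube.internal'           # append the fresh mapping
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts                             # overwrite in a single cp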
	I1206 10:06:31.361512  293728 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 10:06:31.364437  293728 kubeadm.go:884] updating cluster {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:06:31.364599  293728 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:06:31.364692  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.390507  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.390542  293728 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:06:31.390602  293728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:06:31.417903  293728 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:06:31.417928  293728 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:06:31.417937  293728 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 10:06:31.418044  293728 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-387337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
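	Note the bare ExecStart= line in the unit above: in a systemd drop-in it first clears the ExecStart list inherited from the base kubelet.service, and the following ExecStart= then installs the override as the only command. The merged result can be inspected on the node with standard systemctl calls:
	
	  systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	  systemctl show -p ExecStart kubelet   # the effective command line after the reset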
	I1206 10:06:31.418117  293728 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:06:31.443849  293728 cni.go:84] Creating CNI manager for ""
	I1206 10:06:31.443876  293728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:06:31.443900  293728 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 10:06:31.443924  293728 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-387337 NodeName:newest-cni-387337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:06:31.444044  293728 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-387337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
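	The scp a few lines below lands this manifest at /var/tmp/minikube/kubeadm.yaml.new. On kubeadm v1.26 and later, a config of this shape can be schema-checked before it is ever applied (a side note, not something the minikube flow runs):
	
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new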
	I1206 10:06:31.444118  293728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:06:31.452187  293728 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:06:31.452301  293728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:06:31.460150  293728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:06:31.473854  293728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:06:31.487946  293728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 10:06:31.501615  293728 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:06:31.505530  293728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:06:31.516062  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:31.633832  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:06:31.655929  293728 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337 for IP: 192.168.85.2
	I1206 10:06:31.655955  293728 certs.go:195] generating shared ca certs ...
	I1206 10:06:31.655972  293728 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:31.656127  293728 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:06:31.656182  293728 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:06:31.656198  293728 certs.go:257] generating profile certs ...
	I1206 10:06:31.656306  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/client.key
	I1206 10:06:31.656372  293728 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key.0e5b75cd
	I1206 10:06:31.656419  293728 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key
	I1206 10:06:31.656536  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:06:31.656576  293728 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:06:31.656590  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:06:31.656620  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:06:31.656647  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:06:31.656675  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:06:31.656737  293728 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:06:31.657407  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:06:31.678086  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:06:31.699851  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:06:31.722100  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:06:31.743193  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:06:31.762896  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:06:31.781616  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:06:31.801280  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/newest-cni-387337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:06:31.819401  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:06:31.838552  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:06:31.856936  293728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:06:31.875547  293728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:06:31.888930  293728 ssh_runner.go:195] Run: openssl version
	I1206 10:06:31.895342  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.903529  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:06:31.911304  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915287  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.915352  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:06:31.961696  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:06:31.970315  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.981710  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:06:31.992227  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996668  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:06:31.996744  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:06:32.043296  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:06:32.051139  293728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.058979  293728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:06:32.066993  293728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071120  293728 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.071217  293728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:06:32.113955  293728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
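	The block above is the standard OpenSSL CApath dance: 'openssl x509 -hash' prints the subject-name hash, and verifiers look the CA up as <hash>.0 under /etc/ssl/certs, which is why the symlink check targets b5213941.0. By hand, with the same file:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
	  test -L "/etc/ssl/certs/$h.0" && echo linked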
	I1206 10:06:32.121998  293728 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:06:32.126168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:06:32.167933  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:06:32.209594  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:06:32.252826  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:06:32.295168  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:06:32.336384  293728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
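	Each of these checks succeeds only if the certificate is still valid 86400 seconds (24 hours) from now; a failure is what pushes the restart path into regenerating certs. Standalone, with standard openssl flags:
	
	  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	    && echo 'valid for at least 24h' || echo 'expired or expiring within 24h'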
	I1206 10:06:32.377923  293728 kubeadm.go:401] StartCluster: {Name:newest-cni-387337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-387337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:06:32.378019  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:06:32.378107  293728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:06:32.406152  293728 cri.go:89] found id: ""
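
cri.go shells out to crictl with --quiet, so stdout is just container IDs, one per line; the empty result here (found id: "") means no kube-system containers survived the restart. A short sketch of that listing, assuming crictl is installed and reusing the label filter from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// --quiet prints only container IDs; the label narrows the listing
	// to containers whose pods live in kube-system.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out)) // empty slice when nothing is running
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
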
	I1206 10:06:32.406224  293728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:06:32.414373  293728 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:06:32.414394  293728 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:06:32.414444  293728 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:06:32.422214  293728 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:06:32.422855  293728 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-387337" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.423179  293728 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-2448/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-387337" cluster setting kubeconfig missing "newest-cni-387337" context setting]
	I1206 10:06:32.423737  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
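
kubeconfig.go flags the file as needing repair simply because neither a cluster nor a context named "newest-cni-387337" exists in it yet, then rewrites it under a file lock. With client-go the same existence check is a few lines; a sketch assuming k8s.io/client-go is available, not minikube's actual implementation:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/minikube-integration/22049-2448/kubeconfig"
	name := "newest-cni-387337"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	if !hasCluster || !hasContext {
		// minikube would now add the missing entries and write the file
		// back while holding a lock, as the lock.go line above shows.
		fmt.Printf("needs repair: cluster=%v context=%v\n", hasCluster, hasContext)
		return
	}
	fmt.Println("kubeconfig already knows", name)
}
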
	I1206 10:06:32.425135  293728 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:06:32.433653  293728 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 10:06:32.433689  293728 kubeadm.go:602] duration metric: took 19.289872ms to restartPrimaryControlPlane
	I1206 10:06:32.433699  293728 kubeadm.go:403] duration metric: took 55.791147ms to StartCluster
	I1206 10:06:32.433714  293728 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.433786  293728 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:06:32.434769  293728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:06:32.434995  293728 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:06:32.435318  293728 config.go:182] Loaded profile config "newest-cni-387337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:06:32.435370  293728 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:06:32.435471  293728 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-387337"
	I1206 10:06:32.435485  293728 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-387337"
	I1206 10:06:32.435510  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435575  293728 addons.go:70] Setting dashboard=true in profile "newest-cni-387337"
	I1206 10:06:32.435608  293728 addons.go:239] Setting addon dashboard=true in "newest-cni-387337"
	W1206 10:06:32.435630  293728 addons.go:248] addon dashboard should already be in state true
	I1206 10:06:32.435689  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.435986  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436310  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.436715  293728 addons.go:70] Setting default-storageclass=true in profile "newest-cni-387337"
	I1206 10:06:32.436742  293728 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-387337"
	I1206 10:06:32.437054  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.440794  293728 out.go:179] * Verifying Kubernetes components...
	I1206 10:06:32.443631  293728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:06:32.498221  293728 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 10:06:32.501060  293728 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 10:06:32.503631  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 10:06:32.503654  293728 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 10:06:32.503744  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.508648  293728 addons.go:239] Setting addon default-storageclass=true in "newest-cni-387337"
	I1206 10:06:32.508690  293728 host.go:66] Checking if "newest-cni-387337" exists ...
	I1206 10:06:32.509493  293728 cli_runner.go:164] Run: docker container inspect newest-cni-387337 --format={{.State.Status}}
	I1206 10:06:32.523049  293728 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:06:32.526921  293728 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.526947  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:06:32.527022  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.570818  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
	I1206 10:06:32.571691  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
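
sshutil dials the docker-published SSH port (host 33103, container 22) as user docker with the per-machine id_rsa. A minimal equivalent using golang.org/x/crypto/ssh, for illustration only; host-key checking is disabled here for brevity, which real code should not do:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path follows the log; adjust for your environment.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		fmt.Println(err)
		return
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; verify host keys in real code
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33103", cfg)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected as docker@127.0.0.1:33103")
}
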
	I1206 10:06:32.595638  293728 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.595658  293728 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:06:32.595716  293728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-387337
	I1206 10:06:32.624247  293728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/newest-cni-387337/id_rsa Username:docker}
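
The docker container inspect -f template used above indexes the container's port map twice, once by the key "22/tcp" and once by slice position 0, then reads HostPort. The same template evaluated against mock data with Go's text/template; the data shape is an assumption mirroring docker's inspect JSON:

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// Shape mirrors NetworkSettings.Ports in docker's inspect output
	// for a single published port.
	data := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string][]map[string]string{
				"22/tcp": {{"HostIp": "127.0.0.1", "HostPort": "33103"}},
			},
		},
	}
	// Same template string minikube passes to `docker container inspect -f`.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil { // prints: 33103
		panic(err)
	}
	fmt.Println()
}
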
	I1206 10:06:32.694342  293728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:06:32.746370  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 10:06:32.746390  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 10:06:32.765644  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:32.786998  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 10:06:32.787020  293728 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 10:06:32.804870  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:32.820938  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 10:06:32.821012  293728 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 10:06:32.877095  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 10:06:32.877165  293728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 10:06:32.903565  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 10:06:32.903593  293728 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 10:06:32.916625  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 10:06:32.916699  293728 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 10:06:32.930049  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 10:06:32.930072  293728 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 10:06:32.943222  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 10:06:32.943248  293728 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 10:06:32.958124  293728 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:32.958148  293728 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 10:06:32.971454  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:33.482958  293728 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:06:33.483036  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:33.483155  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483183  293728 retry.go:31] will retry after 318.519734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:33.483231  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483244  293728 retry.go:31] will retry after 239.813026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:33.483501  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.483518  293728 retry.go:31] will retry after 128.431008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.612510  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:33.679631  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.679670  293728 retry.go:31] will retry after 494.781452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.723639  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:33.790368  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.790401  293728 retry.go:31] will retry after 373.145908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.802573  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:33.864526  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.864571  293728 retry.go:31] will retry after 555.783365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:33.983818  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:34.164188  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:34.174768  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:34.315072  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.315120  293728 retry.go:31] will retry after 679.653646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:34.319455  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.319548  293728 retry.go:31] will retry after 695.531102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.421513  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:34.483690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:34.487662  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.487697  293728 retry.go:31] will retry after 692.225187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:34.983561  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:34.995819  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:06:35.016010  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:35.122122  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.122225  293728 retry.go:31] will retry after 1.142566381s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:35.138887  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.138925  293728 retry.go:31] will retry after 649.678663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.180839  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:31.222846  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:33.722513  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:35.247363  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.247415  293728 retry.go:31] will retry after 580.881907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.483771  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:35.788736  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:35.829213  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:35.856520  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.856598  293728 retry.go:31] will retry after 1.553154314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:35.896812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.896844  293728 retry.go:31] will retry after 933.683215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:35.984035  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.265085  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:36.326884  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.326918  293728 retry.go:31] will retry after 708.086155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.484141  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:36.831542  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:36.897118  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.897156  293728 retry.go:31] will retry after 1.33074055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:36.983504  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.035538  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:37.096009  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.096042  293728 retry.go:31] will retry after 1.790090237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
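The "will retry after" intervals logged by retry.go (580ms, 933ms, 1.55s, ... growing to 13.3s later in this stretch) show the applies backing off roughly exponentially with jitter between attempts. A minimal shell sketch of that pattern; the loop bound, base delay, and plain doubling are illustrative assumptions, not minikube's actual parameters:

    # Retry an apply with a doubling delay; minikube additionally
    # randomizes each interval, which is why the logged delays are uneven.
    delay=1
    for attempt in 1 2 3 4 5; do
      kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml && break
      echo "attempt ${attempt} failed; retrying in ${delay}s"
      sleep "${delay}"
      delay=$((delay * 2))
    done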
	I1206 10:06:37.410554  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:37.480541  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.480578  293728 retry.go:31] will retry after 966.279559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:37.483641  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:37.984118  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:38.228242  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:38.293907  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.293942  293728 retry.go:31] will retry after 2.616205885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.447170  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:06:38.483864  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:38.514147  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.514181  293728 retry.go:31] will retry after 2.714109668s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.886857  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:38.951997  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.952029  293728 retry.go:31] will retry after 2.462359856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:38.983614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.483264  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:39.983242  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
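The pgrep probes firing every ~500ms above are how minikube waits for the apiserver process to appear inside the node: -f matches the pattern against each process's full command line, -x requires the pattern to match that whole line, and -n returns only the newest match. Run by hand it exits non-zero (no match) for as long as kube-apiserver has not started:

    # Poll for a running kube-apiserver; a non-zero exit means none exists yet.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'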
	W1206 10:06:35.723224  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:38.222673  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:40.483248  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:40.910479  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:06:40.983819  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:40.985785  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:40.985821  293728 retry.go:31] will retry after 2.652074408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.229298  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:41.298980  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.299018  293728 retry.go:31] will retry after 3.795353676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.415143  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:41.478696  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.478758  293728 retry.go:31] will retry after 5.28721939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:41.483845  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:41.983945  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.483250  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:42.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.483241  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:43.638309  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:43.697835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:43.697874  293728 retry.go:31] will retry after 4.887793633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:43.983195  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.483546  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:44.983775  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.095370  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:45.192562  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:45.192602  293728 retry.go:31] will retry after 8.015655906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:06:40.722829  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:42.723326  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:45.223605  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
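In parallel, the second test process (PID 287962) is polling the Ready condition of the no-preload-257359 node directly against 192.168.76.2:8443 and getting the same connection-refused answer from outside the container. Roughly the same check expressed with kubectl, using the node name from the log; the JSONPath selects the status of the node's Ready condition:

    # Prints "True" once the node reports Ready; fails with connection
    # refused while the apiserver is down, exactly as in the warnings above.
    kubectl get node no-preload-257359 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'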
	I1206 10:06:45.483497  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:45.984044  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.483220  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:46.766179  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:46.829923  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:46.829956  293728 retry.go:31] will retry after 4.667102636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:46.984011  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.483312  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:47.984058  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:48.586389  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:06:48.650814  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:48.650848  293728 retry.go:31] will retry after 13.339615646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:48.983299  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.483453  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:49.983414  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:47.722614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:50.222614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:50.483943  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:50.983588  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:51.497329  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:06:51.584226  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:51.584262  293728 retry.go:31] will retry after 10.765270657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:51.983783  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.484023  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:52.983169  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.208585  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:06:53.275063  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:53.275124  293728 retry.go:31] will retry after 12.265040886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:06:53.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:53.983886  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.483520  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:54.983246  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:52.722502  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:54.722548  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:06:55.484066  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:55.983753  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.483532  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:56.983522  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.483514  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:57.983263  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.483994  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:58.983173  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.483759  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:06:59.983187  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:06:56.722592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:06:58.723298  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
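The node_ready.go:55 warnings interleaved here come from the second test process (287962) polling the "Ready" condition of node no-preload-257359 while its apiserver refuses connections. A sketch of an equivalent check using client-go, assuming a modern client-go (context-taking Get signature), an illustrative kubeconfig path, and a ~2s cadence matching the log timestamps; this is not minikube's actual node_ready.go:

// Sketch: poll a Node's Ready condition until the apiserver answers.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ready, err := nodeReady(cs, "no-preload-257359")
		if err != nil {
			fmt.Printf("error getting node (will retry): %v\n", err)
		} else if ready {
			return
		}
		time.Sleep(2 * time.Second)
	}
}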
	I1206 10:07:00.483755  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:00.984174  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.983995  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:01.991432  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:02.091463  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.091500  293728 retry.go:31] will retry after 13.890333948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.349878  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:02.411835  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.411870  293728 retry.go:31] will retry after 7.977295138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:02.483150  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:02.983902  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.483778  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:03.983278  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.483894  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:04.983934  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:01.222997  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:03.722642  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:05.483794  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:05.540834  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:05.606800  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:05.606832  293728 retry.go:31] will retry after 11.29369971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:05.983418  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.483507  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:06.983887  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.483439  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:07.984054  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.483236  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:08.983521  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.483231  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:09.984057  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:06.222598  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:08.222649  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:10.390061  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:10.460795  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:10.460828  293728 retry.go:31] will retry after 24.523063216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:10.483989  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:10.983508  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.483968  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:11.983921  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.484029  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:12.983503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.483736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:13.983533  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.483788  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:14.983198  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:10.722891  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:13.222531  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:15.223567  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:15.483180  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:15.982114  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:07:15.983591  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:16.054278  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:16.054318  293728 retry.go:31] will retry after 20.338606766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:16.484114  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:16.901533  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 10:07:16.984157  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:17.001960  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:17.001998  293728 retry.go:31] will retry after 24.827417164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:17.483261  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:17.983420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.483519  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:18.983281  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.483741  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:19.983176  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:17.722636  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:20.222572  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:20.483695  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:20.983984  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.483862  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:21.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.483812  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:22.983632  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.483796  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:23.984175  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.483235  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:24.983244  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:22.222705  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:24.723752  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:25.483633  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:25.984006  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.483830  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:26.983203  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.483211  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:27.983237  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.484156  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:28.983736  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.483880  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:29.984116  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:27.222614  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:29.223485  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:30.483549  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:30.983243  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.483786  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:31.983608  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:32.483844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:32.483952  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:32.508469  293728 cri.go:89] found id: ""
	I1206 10:07:32.508497  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.508505  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:32.508512  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:32.508574  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:32.533265  293728 cri.go:89] found id: ""
	I1206 10:07:32.533288  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.533297  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:32.533303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:32.533364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:32.562655  293728 cri.go:89] found id: ""
	I1206 10:07:32.562686  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.562695  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:32.562702  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:32.562769  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:32.587755  293728 cri.go:89] found id: ""
	I1206 10:07:32.587781  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.587789  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:32.587796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:32.587855  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:32.613253  293728 cri.go:89] found id: ""
	I1206 10:07:32.613284  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.613292  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:32.613305  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:32.613364  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:32.638621  293728 cri.go:89] found id: ""
	I1206 10:07:32.638648  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.638656  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:32.638662  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:32.638775  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:32.663624  293728 cri.go:89] found id: ""
	I1206 10:07:32.663649  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.663657  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:32.663664  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:32.663724  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:32.687850  293728 cri.go:89] found id: ""
	I1206 10:07:32.687872  293728 logs.go:282] 0 containers: []
	W1206 10:07:32.687881  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:32.687890  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:32.687901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:32.763755  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:32.763831  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:32.788174  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:32.788242  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:32.866103  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:32.857634    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.858159    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.859825    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.860421    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.862051    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:32.857634    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.858159    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.859825    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.860421    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:32.862051    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:32.866126  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:32.866138  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:32.891711  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:32.891745  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
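With the apiserver still unreachable, the cri.go/logs.go sequence above falls back to enumerating control-plane containers by shelling out to crictl with a name filter, and finds none. A minimal sketch of that enumeration with simplified error handling; the command shape and component names are taken from the log, the reporting format is not:

// Sketch: list CRI containers per control-plane component via crictl,
// mirroring the cri.go lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out)) // --quiet prints one container ID per line
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
	}
}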
	I1206 10:07:34.985041  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:07:35.094954  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:35.094988  293728 retry.go:31] will retry after 34.21540436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:07:31.722556  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:33.722685  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:35.421586  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:35.432096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:35.432164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:35.457419  293728 cri.go:89] found id: ""
	I1206 10:07:35.457442  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.457451  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:35.457457  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:35.457520  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:35.481490  293728 cri.go:89] found id: ""
	I1206 10:07:35.481513  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.481521  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:35.481527  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:35.481586  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:35.506409  293728 cri.go:89] found id: ""
	I1206 10:07:35.506432  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.506441  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:35.506447  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:35.506512  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:35.534896  293728 cri.go:89] found id: ""
	I1206 10:07:35.534923  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.534932  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:35.534939  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:35.534997  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:35.560020  293728 cri.go:89] found id: ""
	I1206 10:07:35.560043  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.560052  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:35.560058  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:35.560115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:35.584963  293728 cri.go:89] found id: ""
	I1206 10:07:35.585028  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.585042  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:35.585049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:35.585110  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:35.617464  293728 cri.go:89] found id: ""
	I1206 10:07:35.617487  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.617495  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:35.617501  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:35.617562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:35.642187  293728 cri.go:89] found id: ""
	I1206 10:07:35.642219  293728 logs.go:282] 0 containers: []
	W1206 10:07:35.642228  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:35.642238  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:35.642250  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:35.655709  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:35.655738  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:35.728266  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:35.714434    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.715121    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.716831    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.717292    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.718947    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:35.714434    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.715121    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.716831    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.717292    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:35.718947    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:35.728336  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:35.728379  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:35.766222  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:35.766301  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:35.823000  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:35.823024  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:36.393185  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:07:36.458951  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:36.458990  293728 retry.go:31] will retry after 24.220809087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:38.379270  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:38.389923  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:38.389993  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:38.416450  293728 cri.go:89] found id: ""
	I1206 10:07:38.416517  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.416540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:38.416558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:38.416635  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:38.442635  293728 cri.go:89] found id: ""
	I1206 10:07:38.442663  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.442672  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:38.442680  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:38.442742  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:38.469797  293728 cri.go:89] found id: ""
	I1206 10:07:38.469824  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.469834  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:38.469840  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:38.469899  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:38.497073  293728 cri.go:89] found id: ""
	I1206 10:07:38.497098  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.497107  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:38.497113  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:38.497194  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:38.527432  293728 cri.go:89] found id: ""
	I1206 10:07:38.527465  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.527474  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:38.527481  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:38.527540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:38.554253  293728 cri.go:89] found id: ""
	I1206 10:07:38.554278  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.554290  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:38.554300  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:38.554368  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:38.580022  293728 cri.go:89] found id: ""
	I1206 10:07:38.580070  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.580080  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:38.580087  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:38.580165  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:38.604967  293728 cri.go:89] found id: ""
	I1206 10:07:38.604992  293728 logs.go:282] 0 containers: []
	W1206 10:07:38.605001  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:38.605010  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:38.605041  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:38.672012  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:38.663132    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.663961    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.665865    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.666410    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.668022    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:38.663132    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.663961    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.665865    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.666410    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:38.668022    2101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:38.672044  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:38.672075  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:38.697533  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:38.697567  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:38.750151  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:38.750176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:38.835463  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:38.835500  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:07:35.722832  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:38.222743  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:41.350690  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:41.361865  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:41.361934  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:41.387755  293728 cri.go:89] found id: ""
	I1206 10:07:41.387781  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.387789  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:41.387796  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:41.387854  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:41.412482  293728 cri.go:89] found id: ""
	I1206 10:07:41.412510  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.412519  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:41.412526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:41.412591  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:41.437604  293728 cri.go:89] found id: ""
	I1206 10:07:41.437635  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.437644  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:41.437650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:41.437722  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:41.462503  293728 cri.go:89] found id: ""
	I1206 10:07:41.462573  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.462597  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:41.462616  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:41.462703  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:41.487720  293728 cri.go:89] found id: ""
	I1206 10:07:41.487742  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.487750  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:41.487757  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:41.487819  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:41.513291  293728 cri.go:89] found id: ""
	I1206 10:07:41.513321  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.513332  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:41.513342  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:41.513420  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:41.547109  293728 cri.go:89] found id: ""
	I1206 10:07:41.547132  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.547141  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:41.547147  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:41.547209  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:41.572514  293728 cri.go:89] found id: ""
	I1206 10:07:41.572585  293728 logs.go:282] 0 containers: []
	W1206 10:07:41.572607  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:41.572628  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:41.572669  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:41.629345  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:41.629378  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:41.643897  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:41.643928  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:41.713946  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:41.705234    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.705673    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.707580    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.708362    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.710158    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:41.705234    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.705673    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.707580    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.708362    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:41.710158    2215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:41.714006  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:41.714025  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:41.745589  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:41.745645  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:41.830134  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:07:41.893553  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:41.893593  293728 retry.go:31] will retry after 44.351115962s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:07:44.324517  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:44.335432  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:44.335507  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:44.365594  293728 cri.go:89] found id: ""
	I1206 10:07:44.365621  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.365630  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:44.365637  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:44.365723  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:44.390876  293728 cri.go:89] found id: ""
	I1206 10:07:44.390909  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.390919  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:44.390944  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:44.391026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:44.421424  293728 cri.go:89] found id: ""
	I1206 10:07:44.421448  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.421462  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:44.421468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:44.421525  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:44.445299  293728 cri.go:89] found id: ""
	I1206 10:07:44.445325  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.445335  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:44.445341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:44.445454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:44.473977  293728 cri.go:89] found id: ""
	I1206 10:07:44.473999  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.474008  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:44.474014  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:44.474072  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:44.501273  293728 cri.go:89] found id: ""
	I1206 10:07:44.501299  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.501308  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:44.501341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:44.501415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:44.525106  293728 cri.go:89] found id: ""
	I1206 10:07:44.525136  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.525154  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:44.525161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:44.525223  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:44.550546  293728 cri.go:89] found id: ""
	I1206 10:07:44.550571  293728 logs.go:282] 0 containers: []
	W1206 10:07:44.550580  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:44.550589  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:44.550600  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:44.615941  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:44.607694    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.608515    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610041    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610630    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.612121    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:44.607694    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.608515    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610041    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.610630    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:44.612121    2328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:44.615962  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:44.615975  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:44.641346  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:44.641377  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:44.669493  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:44.669520  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:44.727196  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:44.727357  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:07:40.722832  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:43.222679  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:45.222775  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:47.260652  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:47.271164  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:47.271238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:47.295481  293728 cri.go:89] found id: ""
	I1206 10:07:47.295506  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.295515  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:47.295521  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:47.295581  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:47.321861  293728 cri.go:89] found id: ""
	I1206 10:07:47.321884  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.321892  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:47.321898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:47.321954  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:47.346071  293728 cri.go:89] found id: ""
	I1206 10:07:47.346094  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.346103  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:47.346110  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:47.346169  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:47.373210  293728 cri.go:89] found id: ""
	I1206 10:07:47.373234  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.373242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:47.373249  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:47.373312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:47.403706  293728 cri.go:89] found id: ""
	I1206 10:07:47.403729  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.403739  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:47.403745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:47.403810  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:47.433807  293728 cri.go:89] found id: ""
	I1206 10:07:47.433831  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.433840  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:47.433847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:47.433904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:47.462210  293728 cri.go:89] found id: ""
	I1206 10:07:47.462233  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.462241  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:47.462247  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:47.462308  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:47.486445  293728 cri.go:89] found id: ""
	I1206 10:07:47.486523  293728 logs.go:282] 0 containers: []
	W1206 10:07:47.486546  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:47.486567  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:47.486597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:47.500083  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:47.500114  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:47.568637  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:47.558715    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.559476    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561148    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561466    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.564516    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:47.558715    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.559476    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561148    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.561466    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:47.564516    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:47.568661  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:47.568683  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:47.598178  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:47.598213  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:47.629224  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:47.629249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:50.187574  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 10:07:47.727856  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:50.223331  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:50.198529  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:50.198609  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:50.224708  293728 cri.go:89] found id: ""
	I1206 10:07:50.224731  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.224738  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:50.224744  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:50.224806  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:50.253337  293728 cri.go:89] found id: ""
	I1206 10:07:50.253361  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.253370  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:50.253376  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:50.253433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:50.278723  293728 cri.go:89] found id: ""
	I1206 10:07:50.278750  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.278759  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:50.278766  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:50.278830  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:50.308736  293728 cri.go:89] found id: ""
	I1206 10:07:50.308803  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.308822  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:50.308834  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:50.308894  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:50.333136  293728 cri.go:89] found id: ""
	I1206 10:07:50.333162  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.333171  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:50.333177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:50.333263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:50.358071  293728 cri.go:89] found id: ""
	I1206 10:07:50.358105  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.358114  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:50.358137  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:50.358215  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:50.382078  293728 cri.go:89] found id: ""
	I1206 10:07:50.382111  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.382120  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:50.382141  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:50.382222  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:50.407225  293728 cri.go:89] found id: ""
	I1206 10:07:50.407261  293728 logs.go:282] 0 containers: []
	W1206 10:07:50.407270  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:50.407279  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:50.407291  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:50.466553  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:50.466588  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:50.480420  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:50.480450  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:50.546503  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:50.538132    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.538890    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.540463    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.541036    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.542600    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:50.538132    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.538890    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.540463    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.541036    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:50.542600    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:50.546523  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:50.546546  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:50.573208  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:50.573243  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
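	[editor's note] The cycle that keeps repeating above is a diagnostic sweep: for each control-plane component, minikube runs `sudo crictl ps -a --quiet --name=<component>` and logs a warning when no container matches. A standalone sketch of that sweep is below; it assumes crictl on PATH with sudo access and is illustrative only, not minikube's cri.go/logs.go code.

	// crisweep.go — run the per-component crictl query seen in the log and
	// report components with no matching container.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Mirrors the `No container was found matching "<name>"` warnings above.
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
		}
	}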
	I1206 10:07:53.100604  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:53.111611  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:53.111683  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:53.136465  293728 cri.go:89] found id: ""
	I1206 10:07:53.136494  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.136503  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:53.136510  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:53.136584  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:53.167397  293728 cri.go:89] found id: ""
	I1206 10:07:53.167419  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.167427  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:53.167433  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:53.167501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:53.191735  293728 cri.go:89] found id: ""
	I1206 10:07:53.191769  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.191778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:53.191784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:53.191849  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:53.216472  293728 cri.go:89] found id: ""
	I1206 10:07:53.216495  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.216506  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:53.216513  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:53.216570  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:53.242936  293728 cri.go:89] found id: ""
	I1206 10:07:53.242957  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.242966  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:53.242972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:53.243035  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:53.274015  293728 cri.go:89] found id: ""
	I1206 10:07:53.274041  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.274050  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:53.274056  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:53.274118  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:53.303348  293728 cri.go:89] found id: ""
	I1206 10:07:53.303371  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.303415  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:53.303422  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:53.303486  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:53.332691  293728 cri.go:89] found id: ""
	I1206 10:07:53.332716  293728 logs.go:282] 0 containers: []
	W1206 10:07:53.332724  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:53.332733  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:53.332749  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:53.346274  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:53.346303  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:53.412178  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:53.403243    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.404038    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.405704    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.406009    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.408013    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:53.403243    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.404038    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.405704    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.406009    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:53.408013    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
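	The "describe nodes" step fails identically on every sweep: the node-local kubectl is pointed at /var/lib/minikube/kubeconfig, whose server is localhost:8443, and nothing is listening there because the apiserver container never came up. A quick way to confirm that it is the endpoint, not kubectl itself, that is down, assuming shell access to the node (illustrative commands, not from the harness):

	    # Probe the apiserver port directly; with no apiserver running this
	    # fails with "connection refused", matching the kubectl errors above.
	    curl -ksS https://localhost:8443/healthz || echo "apiserver not listening"

	    # The same describe call the harness runs, shown for reference:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig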
	I1206 10:07:53.412203  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:53.412216  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:53.437974  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:53.438008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:53.469789  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:53.469816  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:07:52.723301  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:55.222438  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:07:56.029614  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:56.044312  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:56.044385  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:56.074035  293728 cri.go:89] found id: ""
	I1206 10:07:56.074061  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.074071  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:56.074077  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:56.074137  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:56.101362  293728 cri.go:89] found id: ""
	I1206 10:07:56.101387  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.101397  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:56.101403  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:56.101472  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:56.132837  293728 cri.go:89] found id: ""
	I1206 10:07:56.132867  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.132876  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:56.132882  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:56.132949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:56.162095  293728 cri.go:89] found id: ""
	I1206 10:07:56.162121  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.162129  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:56.162136  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:56.162195  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:56.190088  293728 cri.go:89] found id: ""
	I1206 10:07:56.190113  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.190122  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:56.190128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:56.190188  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:56.217327  293728 cri.go:89] found id: ""
	I1206 10:07:56.217355  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.217365  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:56.217372  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:56.217432  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:56.242210  293728 cri.go:89] found id: ""
	I1206 10:07:56.242246  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.242255  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:56.242261  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:56.242330  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:56.266843  293728 cri.go:89] found id: ""
	I1206 10:07:56.266871  293728 logs.go:282] 0 containers: []
	W1206 10:07:56.266879  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:56.266888  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:56.266900  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:07:56.324906  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:56.324941  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:56.339074  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:56.339111  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:56.407395  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:56.398763    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.399992    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.400889    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.401941    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.403601    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:56.398763    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.399992    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.400889    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.401941    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:56.403601    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:56.407417  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:56.407434  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:56.433408  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:56.433442  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:58.962420  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:07:58.984606  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:07:58.984688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:07:59.037604  293728 cri.go:89] found id: ""
	I1206 10:07:59.037795  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.038054  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:07:59.038096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:07:59.038236  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:07:59.074512  293728 cri.go:89] found id: ""
	I1206 10:07:59.074555  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.074564  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:07:59.074571  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:07:59.074638  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:07:59.101868  293728 cri.go:89] found id: ""
	I1206 10:07:59.101895  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.101904  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:07:59.101910  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:07:59.101973  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:07:59.127188  293728 cri.go:89] found id: ""
	I1206 10:07:59.127214  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.127223  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:07:59.127230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:07:59.127286  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:07:59.152234  293728 cri.go:89] found id: ""
	I1206 10:07:59.152259  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.152268  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:07:59.152274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:07:59.152342  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:07:59.177629  293728 cri.go:89] found id: ""
	I1206 10:07:59.177654  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.177663  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:07:59.177670  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:07:59.177728  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:07:59.202156  293728 cri.go:89] found id: ""
	I1206 10:07:59.202185  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.202195  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:07:59.202201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:07:59.202261  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:07:59.227130  293728 cri.go:89] found id: ""
	I1206 10:07:59.227165  293728 logs.go:282] 0 containers: []
	W1206 10:07:59.227174  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:07:59.227183  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:07:59.227204  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:07:59.241522  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:07:59.241597  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:07:59.311704  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:07:59.302465    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.302959    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.304730    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.305205    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.306765    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:07:59.302465    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.302959    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.304730    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.305205    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:07:59.306765    2906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:07:59.311730  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:07:59.311742  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:07:59.337213  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:07:59.337246  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:07:59.365911  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:07:59.365940  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:07:57.222678  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:07:59.223226  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:00.680788  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 10:08:00.745958  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:00.746077  293728 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
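	The storage-provisioner addon fails for the same underlying reason: kubectl apply validates the manifest against the server's OpenAPI schema, and that download needs a live apiserver. The error text itself points at the escape hatch, --validate=false, which skips the schema fetch. A hedged sketch of that retry (illustrative; the apply request itself still needs a reachable apiserver, so with the control plane down it would fail at the request stage instead):

	    # Skip client-side schema validation, as the error message suggests.
	    # Note: apply still requires a reachable apiserver; this only avoids
	    # the openapi download step that is failing in the log above.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	        --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml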
	I1206 10:08:01.925540  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:01.936468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:01.936592  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:01.965164  293728 cri.go:89] found id: ""
	I1206 10:08:01.965242  293728 logs.go:282] 0 containers: []
	W1206 10:08:01.965277  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:01.965302  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:01.965393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:02.013736  293728 cri.go:89] found id: ""
	I1206 10:08:02.013774  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.013783  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:02.013790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:02.013862  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:02.058535  293728 cri.go:89] found id: ""
	I1206 10:08:02.058627  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.058651  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:02.058685  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:02.058798  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:02.091149  293728 cri.go:89] found id: ""
	I1206 10:08:02.091213  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.091242  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:02.091286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:02.091460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:02.116844  293728 cri.go:89] found id: ""
	I1206 10:08:02.116870  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.116878  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:02.116884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:02.116945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:02.143338  293728 cri.go:89] found id: ""
	I1206 10:08:02.143439  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.143463  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:02.143485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:02.143573  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:02.169310  293728 cri.go:89] found id: ""
	I1206 10:08:02.169333  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.169342  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:02.169348  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:02.169410  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:02.200025  293728 cri.go:89] found id: ""
	I1206 10:08:02.200096  293728 logs.go:282] 0 containers: []
	W1206 10:08:02.200104  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:02.200113  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:02.200125  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:02.257304  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:02.257340  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:02.271507  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:02.271541  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:02.341058  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:02.331854    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.332684    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334338    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334769    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.336486    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:02.331854    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.332684    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334338    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.334769    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:02.336486    3029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:02.341084  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:02.341097  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:02.367636  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:02.367672  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:04.899503  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:04.910154  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:04.910231  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:04.934598  293728 cri.go:89] found id: ""
	I1206 10:08:04.934623  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.934632  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:04.934638  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:04.934699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:04.959971  293728 cri.go:89] found id: ""
	I1206 10:08:04.959995  293728 logs.go:282] 0 containers: []
	W1206 10:08:04.960004  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:04.960010  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:04.960071  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:05.027645  293728 cri.go:89] found id: ""
	I1206 10:08:05.027668  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.027677  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:05.027683  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:05.027758  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:05.077828  293728 cri.go:89] found id: ""
	I1206 10:08:05.077868  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.077878  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:05.077884  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:05.077946  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:05.103986  293728 cri.go:89] found id: ""
	I1206 10:08:05.104014  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.104023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:05.104029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:05.104091  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:05.129703  293728 cri.go:89] found id: ""
	I1206 10:08:05.129778  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.129822  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:05.129843  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:05.129930  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:05.156958  293728 cri.go:89] found id: ""
	I1206 10:08:05.156982  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.156990  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:05.156996  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:05.157058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:05.182537  293728 cri.go:89] found id: ""
	I1206 10:08:05.182565  293728 logs.go:282] 0 containers: []
	W1206 10:08:05.182575  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:05.182585  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:05.182598  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:08:01.722650  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:04.222533  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:05.196389  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:05.196419  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:05.262239  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:05.253199    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.253990    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.255826    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.256391    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.257908    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:05.253199    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.253990    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.255826    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.256391    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:05.257908    3144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:05.262265  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:05.262278  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:05.288138  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:05.288178  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:05.316468  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:05.316497  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:07.872986  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:07.886594  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:07.886666  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:07.912554  293728 cri.go:89] found id: ""
	I1206 10:08:07.912580  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.912589  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:07.912595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:07.912668  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:07.938006  293728 cri.go:89] found id: ""
	I1206 10:08:07.938033  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.938042  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:07.938049  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:07.938107  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:07.967969  293728 cri.go:89] found id: ""
	I1206 10:08:07.967995  293728 logs.go:282] 0 containers: []
	W1206 10:08:07.968004  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:07.968011  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:07.968079  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:08.001472  293728 cri.go:89] found id: ""
	I1206 10:08:08.001495  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.001504  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:08.001511  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:08.001577  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:08.064509  293728 cri.go:89] found id: ""
	I1206 10:08:08.064538  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.064547  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:08.064554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:08.064612  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:08.094308  293728 cri.go:89] found id: ""
	I1206 10:08:08.094376  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.094402  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:08.094434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:08.094522  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:08.124650  293728 cri.go:89] found id: ""
	I1206 10:08:08.124695  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.124705  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:08.124712  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:08.124782  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:08.150816  293728 cri.go:89] found id: ""
	I1206 10:08:08.150851  293728 logs.go:282] 0 containers: []
	W1206 10:08:08.150860  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:08.150868  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:08.150879  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:08.207170  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:08.207203  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:08.220834  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:08.220860  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:08.285113  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:08.276678    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.277616    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279172    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.279585    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:08.281070    3256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:08.285138  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:08.285153  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:08.311342  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:08.311548  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:09.310714  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 10:08:09.371609  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:09.371709  293728 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 10:08:06.222644  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:08.722561  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
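	Interleaved with the 293728 sweeps are retries from the no-preload test (pid 287962), which polls the node's Ready condition via 192.168.76.2:8443 and hits the same refused connection. An equivalent manual poll, assuming kubectl access to that profile (illustrative, not the test's own code):

	    # Poll the Ready condition of the node the test is waiting on.
	    # With the apiserver down, kubectl errors out instead of printing
	    # "True"/"False", matching the node_ready retries in the log.
	    kubectl get node no-preload-257359 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'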
	I1206 10:08:10.840228  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:10.850847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:10.850914  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:10.881439  293728 cri.go:89] found id: ""
	I1206 10:08:10.881517  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.881540  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:10.881555  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:10.881629  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:10.910942  293728 cri.go:89] found id: ""
	I1206 10:08:10.910971  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.910980  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:10.910987  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:10.911049  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:10.936471  293728 cri.go:89] found id: ""
	I1206 10:08:10.936495  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.936503  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:10.936509  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:10.936566  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:10.964540  293728 cri.go:89] found id: ""
	I1206 10:08:10.964567  293728 logs.go:282] 0 containers: []
	W1206 10:08:10.964575  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:10.964581  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:10.964650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:11.035295  293728 cri.go:89] found id: ""
	I1206 10:08:11.035322  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.035332  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:11.035354  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:11.035433  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:11.081240  293728 cri.go:89] found id: ""
	I1206 10:08:11.081266  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.081275  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:11.081282  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:11.081347  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:11.109502  293728 cri.go:89] found id: ""
	I1206 10:08:11.109543  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.109554  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:11.109561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:11.109625  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:11.138072  293728 cri.go:89] found id: ""
	I1206 10:08:11.138100  293728 logs.go:282] 0 containers: []
	W1206 10:08:11.138113  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:11.138122  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:11.138134  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:11.207996  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:11.198639    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.199998    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202044    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.202743    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:11.203981    3367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:11.208060  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:11.208081  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:11.234490  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:11.234525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:11.263495  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:11.263525  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:11.323991  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:11.324034  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:13.838014  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:13.849112  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:13.849181  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:13.873403  293728 cri.go:89] found id: ""
	I1206 10:08:13.873472  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.873498  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:13.873515  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:13.873602  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:13.900596  293728 cri.go:89] found id: ""
	I1206 10:08:13.900616  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.900625  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:13.900631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:13.900694  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:13.925385  293728 cri.go:89] found id: ""
	I1206 10:08:13.925409  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.925417  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:13.925424  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:13.925481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:13.950796  293728 cri.go:89] found id: ""
	I1206 10:08:13.950823  293728 logs.go:282] 0 containers: []
	W1206 10:08:13.950837  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:13.950844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:13.950902  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:14.028934  293728 cri.go:89] found id: ""
	I1206 10:08:14.028964  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.028973  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:14.028979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:14.029058  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:14.063925  293728 cri.go:89] found id: ""
	I1206 10:08:14.063948  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.063957  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:14.063963  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:14.064024  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:14.091439  293728 cri.go:89] found id: ""
	I1206 10:08:14.091465  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.091473  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:14.091480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:14.091556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:14.116453  293728 cri.go:89] found id: ""
	I1206 10:08:14.116476  293728 logs.go:282] 0 containers: []
	W1206 10:08:14.116485  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:14.116494  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:14.116506  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:14.173576  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:14.173615  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:14.187707  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:14.187736  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:14.256417  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:14.248355    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.248830    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250365    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.250850    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:14.252318    3491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:14.256440  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:14.256452  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:14.281458  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:14.281490  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
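(Editor's note: the block above is one iteration of minikube's control-plane wait loop — a pgrep for kube-apiserver, a crictl query per expected component, then log gathering over kubelet, dmesg, describe nodes, containerd, and container status — and every query returns an empty ID list because no control-plane container ever started. Below is a minimal sketch of that kind of probe loop, assuming only that sudo and crictl are on PATH; the helper name and the use of a local exec call instead of minikube's ssh_runner are illustrative, not minikube's actual implementation.)

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForContainer polls `crictl ps` until a container whose name matches
// the filter exists, or the context deadline expires. The ~3s interval
// matches the cadence visible in the log above.
func waitForContainer(ctx context.Context, name string) error {
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return nil // at least one matching container ID was printed
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("no %q container appeared: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForContainer(ctx, "kube-apiserver"); err != nil {
		fmt.Println(err)
	}
}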
	W1206 10:08:10.722908  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:13.223465  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:16.809300  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:16.820406  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:16.820481  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:16.845040  293728 cri.go:89] found id: ""
	I1206 10:08:16.845105  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.845130  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:16.845144  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:16.845217  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:16.875450  293728 cri.go:89] found id: ""
	I1206 10:08:16.875475  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.875484  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:16.875500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:16.875562  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:16.902002  293728 cri.go:89] found id: ""
	I1206 10:08:16.902048  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.902059  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:16.902068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:16.902146  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:16.927319  293728 cri.go:89] found id: ""
	I1206 10:08:16.927353  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.927361  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:16.927368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:16.927466  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:16.952239  293728 cri.go:89] found id: ""
	I1206 10:08:16.952265  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.952273  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:16.952280  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:16.952386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:16.994322  293728 cri.go:89] found id: ""
	I1206 10:08:16.994351  293728 logs.go:282] 0 containers: []
	W1206 10:08:16.994360  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:16.994368  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:16.994437  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:17.032079  293728 cri.go:89] found id: ""
	I1206 10:08:17.032113  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.032122  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:17.032128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:17.032201  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:17.079256  293728 cri.go:89] found id: ""
	I1206 10:08:17.079321  293728 logs.go:282] 0 containers: []
	W1206 10:08:17.079343  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:17.079364  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:17.079406  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:17.104677  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:17.104707  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:17.136676  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:17.136701  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:17.195915  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:17.195950  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:17.209626  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:17.209653  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:17.278745  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:17.269101    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.269734    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271307    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.271892    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:17.273910    3616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
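(Editor's note: every kubectl failure in this log reduces to the same symptom — nothing is accepting TCP connections on the apiserver port, so client-side discovery fails before any API call is made. A plain TCP dial makes that concrete; this is an illustrative diagnostic, not part of the test harness.)

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The failing kubectl calls above all reduce to this: the TCP connect
	// to 127.0.0.1:8443 is refused because no kube-apiserver is bound there.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // expected while the control plane is down
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}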
	I1206 10:08:19.780767  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:19.791658  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:19.791756  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:19.820516  293728 cri.go:89] found id: ""
	I1206 10:08:19.820539  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.820547  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:19.820554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:19.820652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:19.845473  293728 cri.go:89] found id: ""
	I1206 10:08:19.845499  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.845507  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:19.845514  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:19.845572  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:19.871555  293728 cri.go:89] found id: ""
	I1206 10:08:19.871580  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.871592  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:19.871598  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:19.871658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:19.902754  293728 cri.go:89] found id: ""
	I1206 10:08:19.902778  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.902787  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:19.902793  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:19.902853  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:19.927447  293728 cri.go:89] found id: ""
	I1206 10:08:19.927473  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.927482  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:19.927489  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:19.927549  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:19.951607  293728 cri.go:89] found id: ""
	I1206 10:08:19.951634  293728 logs.go:282] 0 containers: []
	W1206 10:08:19.951644  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:19.951651  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:19.951718  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:20.023839  293728 cri.go:89] found id: ""
	I1206 10:08:20.023868  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.023879  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:20.023886  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:20.023951  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:20.064702  293728 cri.go:89] found id: ""
	I1206 10:08:20.064730  293728 logs.go:282] 0 containers: []
	W1206 10:08:20.064739  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:20.064748  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:20.064761  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:20.131531  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:20.121981    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.122773    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.124609    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.125239    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:20.126941    3711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:20.131555  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:20.131566  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:20.157955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:20.157991  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:20.188100  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:20.188126  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:15.723287  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:18.223318  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:20.248399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:20.248437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:22.762476  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:22.774338  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:22.774408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:22.803197  293728 cri.go:89] found id: ""
	I1206 10:08:22.803220  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.803228  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:22.803234  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:22.803292  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:22.828985  293728 cri.go:89] found id: ""
	I1206 10:08:22.829009  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.829018  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:22.829024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:22.829084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:22.857670  293728 cri.go:89] found id: ""
	I1206 10:08:22.857695  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.857704  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:22.857710  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:22.857770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:22.886863  293728 cri.go:89] found id: ""
	I1206 10:08:22.886889  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.886898  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:22.886905  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:22.886967  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:22.912046  293728 cri.go:89] found id: ""
	I1206 10:08:22.912072  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.912080  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:22.912086  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:22.912149  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:22.940438  293728 cri.go:89] found id: ""
	I1206 10:08:22.940516  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.940530  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:22.940538  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:22.940597  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:22.965932  293728 cri.go:89] found id: ""
	I1206 10:08:22.965957  293728 logs.go:282] 0 containers: []
	W1206 10:08:22.965966  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:22.965973  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:22.966034  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:23.036167  293728 cri.go:89] found id: ""
	I1206 10:08:23.036194  293728 logs.go:282] 0 containers: []
	W1206 10:08:23.036203  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:23.036212  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:23.036224  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:23.054454  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:23.054481  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:23.120660  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:23.111552    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.112328    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114040    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.114610    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:23.116286    3831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:23.120680  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:23.120692  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:23.146879  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:23.146913  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:23.177356  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:23.177389  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:20.722592  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:23.222550  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
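(Editor's note: the interleaved 287962 lines come from the parallel no-preload test, which polls the node's Ready condition at roughly 2.5-second intervals and logs "will retry" on each refused connection. A hedged sketch of that kind of poll follows, assuming a client-go dependency and a placeholder kubeconfig path — both illustrative; this is not the test's actual code.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the real test derives it from the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-257359", metav1.GetOptions{})
		if err != nil {
			// Matches the warnings above: connection refused until the
			// apiserver comes back, so log and retry.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2500 * time.Millisecond)
	}
}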
	I1206 10:08:25.739842  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:25.751155  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:25.751238  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:25.781790  293728 cri.go:89] found id: ""
	I1206 10:08:25.781813  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.781821  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:25.781828  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:25.781884  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:25.809915  293728 cri.go:89] found id: ""
	I1206 10:08:25.809940  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.809948  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:25.809954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:25.810014  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:25.840293  293728 cri.go:89] found id: ""
	I1206 10:08:25.840318  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.840327  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:25.840334  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:25.840390  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:25.869368  293728 cri.go:89] found id: ""
	I1206 10:08:25.869401  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.869410  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:25.869416  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:25.869488  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:25.898302  293728 cri.go:89] found id: ""
	I1206 10:08:25.898335  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.898344  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:25.898351  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:25.898417  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:25.925837  293728 cri.go:89] found id: ""
	I1206 10:08:25.925864  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.925873  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:25.925880  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:25.925940  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:25.950501  293728 cri.go:89] found id: ""
	I1206 10:08:25.950537  293728 logs.go:282] 0 containers: []
	W1206 10:08:25.950546  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:25.950552  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:25.950618  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:26.003264  293728 cri.go:89] found id: ""
	I1206 10:08:26.003294  293728 logs.go:282] 0 containers: []
	W1206 10:08:26.003305  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:26.003316  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:26.003327  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:26.046472  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:26.046503  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:26.091770  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:26.091798  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:26.148719  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:26.148755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:26.165689  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:26.165733  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:26.231230  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:26.222354    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.223218    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.224969    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.225558    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:26.227223    3955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:26.245490  293728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 10:08:26.310812  293728 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:08:26.310914  293728 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:08:26.314238  293728 out.go:179] * Enabled addons: 
	I1206 10:08:26.317143  293728 addons.go:530] duration metric: took 1m53.881766525s for enable addons: enabled=[]
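(Editor's note: the dashboard enable above fails only because kubectl apply cannot download the OpenAPI schema from the dead apiserver for client-side validation — the manifests themselves are never the problem — so minikube logs "apply failed, will retry" and ends with enabled=[]. Below is a minimal sketch of that retry shape; the attempt count and delay are illustrative guesses, not minikube's actual policy.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` a few times before giving up,
// mirroring the "apply failed, will retry" behaviour in the log above.
func applyWithRetry(kubeconfig string, manifests []string, attempts int) error {
	args := []string{"--kubeconfig", kubeconfig, "apply", "--force"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastErr error
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i, err, out)
		time.Sleep(5 * time.Second) // illustrative backoff
	}
	return lastErr
}

func main() {
	// Hypothetical single manifest; the real addon applies all dashboard-*.yaml files.
	err := applyWithRetry("/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/dashboard-ns.yaml"}, 3)
	if err != nil {
		fmt.Println(err)
	}
}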
	I1206 10:08:28.731518  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:28.742380  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:28.742460  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:28.768392  293728 cri.go:89] found id: ""
	I1206 10:08:28.768416  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.768425  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:28.768431  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:28.768489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:28.795017  293728 cri.go:89] found id: ""
	I1206 10:08:28.795043  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.795052  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:28.795059  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:28.795130  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:28.831707  293728 cri.go:89] found id: ""
	I1206 10:08:28.831734  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.831742  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:28.831748  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:28.831807  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:28.857267  293728 cri.go:89] found id: ""
	I1206 10:08:28.857293  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.857304  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:28.857317  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:28.857415  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:28.887732  293728 cri.go:89] found id: ""
	I1206 10:08:28.887754  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.887762  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:28.887769  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:28.887827  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:28.912905  293728 cri.go:89] found id: ""
	I1206 10:08:28.912970  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.912984  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:28.912992  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:28.913051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:28.937740  293728 cri.go:89] found id: ""
	I1206 10:08:28.937764  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.937774  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:28.937781  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:28.937840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:28.964042  293728 cri.go:89] found id: ""
	I1206 10:08:28.964111  293728 logs.go:282] 0 containers: []
	W1206 10:08:28.964126  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:28.964135  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:28.964147  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:29.034399  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:29.034439  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:29.059150  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:29.059176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:29.134200  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:29.125269    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.126061    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.127729    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.128388    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:29.130079    4063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:29.134222  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:29.134235  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:29.160868  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:29.160901  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:25.722683  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:27.723593  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:30.222645  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:31.689201  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:31.700497  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:31.700569  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:31.726402  293728 cri.go:89] found id: ""
	I1206 10:08:31.726426  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.726434  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:31.726441  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:31.726503  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:31.752620  293728 cri.go:89] found id: ""
	I1206 10:08:31.752644  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.752652  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:31.752659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:31.752720  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:31.778722  293728 cri.go:89] found id: ""
	I1206 10:08:31.778749  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.778758  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:31.778764  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:31.778825  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:31.804730  293728 cri.go:89] found id: ""
	I1206 10:08:31.804754  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.804762  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:31.804768  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:31.804828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:31.834276  293728 cri.go:89] found id: ""
	I1206 10:08:31.834303  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.834312  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:31.834322  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:31.834388  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:31.859721  293728 cri.go:89] found id: ""
	I1206 10:08:31.859744  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.859752  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:31.859759  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:31.859889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:31.888679  293728 cri.go:89] found id: ""
	I1206 10:08:31.888746  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.888760  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:31.888767  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:31.888828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:31.915769  293728 cri.go:89] found id: ""
	I1206 10:08:31.915794  293728 logs.go:282] 0 containers: []
	W1206 10:08:31.915804  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:31.915812  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:31.915825  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:31.929129  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:31.929155  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:32.017380  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:31.999265    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.000314    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004340    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.004746    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:32.008097    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:32.017406  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:32.017420  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:32.046135  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:32.046218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:32.081462  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:32.081485  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
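Each cycle above queries crictl for every control-plane container by name and gets back an empty ID list ("found id: \"\"", "0 containers"). The following is a minimal, hypothetical Go sketch of that query, run directly rather than through minikube's ssh_runner; the crictl flags are taken from the log, everything else (function name, output handling) is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the queries in the log: `crictl ps -a --quiet
// --name=<name>` prints one container ID per line, so empty output means
// no matching container exists in any state.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("crictl query for %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s)\n", name, len(ids))
		}
	}
}
```

In this run every query returns an empty list, which is why the subsequent "describe nodes" attempts fail: no apiserver container ever came up.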
	I1206 10:08:34.642406  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:34.653187  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:34.653263  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:34.683091  293728 cri.go:89] found id: ""
	I1206 10:08:34.683116  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.683124  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:34.683130  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:34.683189  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:34.709426  293728 cri.go:89] found id: ""
	I1206 10:08:34.709453  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.709462  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:34.709468  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:34.709528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:34.740189  293728 cri.go:89] found id: ""
	I1206 10:08:34.740215  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.740223  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:34.740230  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:34.740289  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:34.769902  293728 cri.go:89] found id: ""
	I1206 10:08:34.769932  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.769942  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:34.769954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:34.770026  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:34.797331  293728 cri.go:89] found id: ""
	I1206 10:08:34.797358  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.797367  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:34.797374  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:34.797434  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:34.823286  293728 cri.go:89] found id: ""
	I1206 10:08:34.823309  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.823318  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:34.823324  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:34.823406  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:34.849130  293728 cri.go:89] found id: ""
	I1206 10:08:34.849153  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.849162  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:34.849168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:34.849229  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:34.873883  293728 cri.go:89] found id: ""
	I1206 10:08:34.873905  293728 logs.go:282] 0 containers: []
	W1206 10:08:34.873913  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:34.873922  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:34.873933  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:34.929942  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:34.929976  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:34.944124  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:34.944205  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:35.057155  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:35.041792    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043038    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.043755    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.049366    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:35.050091    4290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:35.057180  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:35.057193  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:35.090699  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:35.090741  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:32.223260  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:34.723506  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
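The interleaved node_ready.go warnings come from a second test process (pid 287962) polling the no-preload node's Ready condition until a deadline. A simplified sketch of such a retry loop is below; the URL and the 6-minute budget are taken from the log (see the "took 6m0.000230261s" line near the end), while the poll interval and the skipped TLS verification are simplifications — the real client authenticates with the cluster's certificates:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. dial tcp 192.168.76.2:8443: connect: connection refused
			fmt.Printf("error getting node (will retry): %v\n", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return
	}
	fmt.Println("WaitNodeCondition: context deadline exceeded")
}
```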
	I1206 10:08:37.620713  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:37.631409  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:37.631478  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:37.668926  293728 cri.go:89] found id: ""
	I1206 10:08:37.668949  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.668958  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:37.668966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:37.669025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:37.698809  293728 cri.go:89] found id: ""
	I1206 10:08:37.698831  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.698840  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:37.698846  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:37.698905  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:37.726123  293728 cri.go:89] found id: ""
	I1206 10:08:37.726146  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.726155  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:37.726161  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:37.726219  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:37.750745  293728 cri.go:89] found id: ""
	I1206 10:08:37.750818  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.750842  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:37.750861  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:37.750945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:37.777744  293728 cri.go:89] found id: ""
	I1206 10:08:37.777814  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.777837  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:37.777857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:37.777945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:37.804124  293728 cri.go:89] found id: ""
	I1206 10:08:37.804151  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.804160  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:37.804166  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:37.804243  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:37.828930  293728 cri.go:89] found id: ""
	I1206 10:08:37.828995  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.829010  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:37.829017  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:37.829076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:37.853436  293728 cri.go:89] found id: ""
	I1206 10:08:37.853459  293728 logs.go:282] 0 containers: []
	W1206 10:08:37.853468  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:37.853476  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:37.853493  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:37.910673  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:37.910709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:37.926464  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:37.926504  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:38.046192  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:38.019476    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.031978    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.032900    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037073    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037736    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:38.019476    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.031978    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.032900    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037073    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:38.037736    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:38.046217  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:38.046230  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:38.078770  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:38.078805  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:37.222544  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:39.222587  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:40.613605  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:40.624180  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:40.624256  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:40.648680  293728 cri.go:89] found id: ""
	I1206 10:08:40.648706  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.648715  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:40.648721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:40.648783  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:40.674691  293728 cri.go:89] found id: ""
	I1206 10:08:40.674716  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.674725  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:40.674732  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:40.674802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:40.700970  293728 cri.go:89] found id: ""
	I1206 10:08:40.700997  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.701006  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:40.701013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:40.701076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:40.729911  293728 cri.go:89] found id: ""
	I1206 10:08:40.729940  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.729949  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:40.729956  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:40.730020  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:40.755581  293728 cri.go:89] found id: ""
	I1206 10:08:40.755611  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.755620  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:40.755626  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:40.755686  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:40.781938  293728 cri.go:89] found id: ""
	I1206 10:08:40.782007  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.782030  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:40.782051  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:40.782139  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:40.811855  293728 cri.go:89] found id: ""
	I1206 10:08:40.811880  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.811889  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:40.811895  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:40.811961  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:40.841527  293728 cri.go:89] found id: ""
	I1206 10:08:40.841553  293728 logs.go:282] 0 containers: []
	W1206 10:08:40.841562  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:40.841571  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:40.841583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:40.854956  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:40.854983  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:40.924783  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:40.916653    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.917278    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.918774    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.919183    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.920651    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:40.916653    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.917278    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.918774    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.919183    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:40.920651    4519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:40.924807  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:40.924823  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:40.950611  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:40.950646  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:41.021978  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:41.022008  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
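Besides the container queries, each cycle gathers kubelet, containerd, dmesg, and container-status output via /bin/bash -c. A small sketch that runs the same shell commands locally for illustration (the commands are copied from the log; the struct layout and byte-count reporting are assumptions, not minikube's logs.go):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"containerd", `sudo journalctl -u containerd -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
	}
	for _, c := range cmds {
		// CombinedOutput captures both stdout and stderr, which is what
		// you want when a unit has never logged anything.
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", c.name, err)
			continue
		}
		fmt.Printf("gathered %d bytes of %s logs\n", len(out), c.name)
	}
}
```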
	I1206 10:08:43.596447  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:43.607463  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:43.607540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:43.632638  293728 cri.go:89] found id: ""
	I1206 10:08:43.632660  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.632668  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:43.632675  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:43.632737  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:43.657538  293728 cri.go:89] found id: ""
	I1206 10:08:43.657616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.657632  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:43.657639  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:43.657711  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:43.683595  293728 cri.go:89] found id: ""
	I1206 10:08:43.683621  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.683630  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:43.683636  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:43.683706  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:43.709348  293728 cri.go:89] found id: ""
	I1206 10:08:43.709371  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.709380  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:43.709387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:43.709451  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:43.734592  293728 cri.go:89] found id: ""
	I1206 10:08:43.734616  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.734625  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:43.734631  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:43.734689  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:43.761297  293728 cri.go:89] found id: ""
	I1206 10:08:43.761362  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.761387  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:43.761405  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:43.761493  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:43.789795  293728 cri.go:89] found id: ""
	I1206 10:08:43.789831  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.789840  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:43.789847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:43.789919  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:43.817708  293728 cri.go:89] found id: ""
	I1206 10:08:43.817735  293728 logs.go:282] 0 containers: []
	W1206 10:08:43.817744  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:43.817762  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:43.817774  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:43.831448  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:43.831483  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:43.897033  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:43.888843    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.889730    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891528    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891839    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.893322    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:43.888843    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.889730    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891528    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.891839    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:43.893322    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:43.897107  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:43.897131  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:43.922955  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:43.922990  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:43.960423  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:43.960457  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:08:41.722543  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:43.723229  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:46.534389  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:46.545120  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:46.545205  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:46.570287  293728 cri.go:89] found id: ""
	I1206 10:08:46.570313  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.570322  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:46.570328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:46.570391  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:46.600524  293728 cri.go:89] found id: ""
	I1206 10:08:46.600609  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.600631  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:46.600650  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:46.600734  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:46.627292  293728 cri.go:89] found id: ""
	I1206 10:08:46.627314  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.627322  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:46.627328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:46.627424  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:46.652620  293728 cri.go:89] found id: ""
	I1206 10:08:46.652642  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.652651  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:46.652657  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:46.652716  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:46.681992  293728 cri.go:89] found id: ""
	I1206 10:08:46.682015  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.682023  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:46.682029  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:46.682087  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:46.708290  293728 cri.go:89] found id: ""
	I1206 10:08:46.708363  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.708408  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:46.708434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:46.708528  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:46.737816  293728 cri.go:89] found id: ""
	I1206 10:08:46.737890  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.737915  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:46.737935  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:46.738021  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:46.768334  293728 cri.go:89] found id: ""
	I1206 10:08:46.768407  293728 logs.go:282] 0 containers: []
	W1206 10:08:46.768430  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:46.768451  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:46.768491  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:46.782268  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:46.782344  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:46.850687  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:46.840824    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.841622    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.843626    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.844354    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.846055    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:46.840824    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.841622    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.843626    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.844354    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:46.846055    4742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:46.850714  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:46.850727  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:46.877310  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:46.877362  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:46.909345  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:46.909376  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:49.467346  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:49.477899  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:49.477971  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:49.502546  293728 cri.go:89] found id: ""
	I1206 10:08:49.502569  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.502578  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:49.502584  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:49.502646  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:49.527592  293728 cri.go:89] found id: ""
	I1206 10:08:49.527663  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.527686  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:49.527699  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:49.527760  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:49.553748  293728 cri.go:89] found id: ""
	I1206 10:08:49.553770  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.553778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:49.553784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:49.553841  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:49.580182  293728 cri.go:89] found id: ""
	I1206 10:08:49.580205  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.580214  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:49.580220  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:49.580285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:49.609009  293728 cri.go:89] found id: ""
	I1206 10:08:49.609034  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.609043  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:49.609050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:49.609114  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:49.634196  293728 cri.go:89] found id: ""
	I1206 10:08:49.634218  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.634227  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:49.634233  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:49.634293  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:49.660015  293728 cri.go:89] found id: ""
	I1206 10:08:49.660038  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.660047  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:49.660053  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:49.660115  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:49.685329  293728 cri.go:89] found id: ""
	I1206 10:08:49.685355  293728 logs.go:282] 0 containers: []
	W1206 10:08:49.685364  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:49.685373  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:49.685385  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:49.699189  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:49.699218  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:49.768229  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:49.760011    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.760509    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762154    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762619    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.764026    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:08:49.760011    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.760509    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762154    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.762619    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:49.764026    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:08:49.768253  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:49.768267  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:49.794221  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:49.794255  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:49.825320  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:49.825349  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
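The repeated describe-nodes stderr ("connection refused" on localhost:8443) just means nothing is listening on the apiserver port inside the node. As a hedged aside, a bare TCP dial reproduces the same check without involving kubectl or a kubeconfig:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// matches the log: dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```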
	W1206 10:08:46.222859  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:48.223148  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:50.223492  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:52.381962  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:52.392897  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:52.392974  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:52.421172  293728 cri.go:89] found id: ""
	I1206 10:08:52.421197  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.421206  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:52.421212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:52.421276  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:52.449281  293728 cri.go:89] found id: ""
	I1206 10:08:52.449305  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.449313  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:52.449320  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:52.449378  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:52.474517  293728 cri.go:89] found id: ""
	I1206 10:08:52.474539  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.474547  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:52.474553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:52.474616  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:52.500435  293728 cri.go:89] found id: ""
	I1206 10:08:52.500458  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.500466  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:52.500473  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:52.500532  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:52.526935  293728 cri.go:89] found id: ""
	I1206 10:08:52.526957  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.526965  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:52.526972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:52.527031  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:52.553625  293728 cri.go:89] found id: ""
	I1206 10:08:52.553646  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.553654  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:52.553663  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:52.553721  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:52.580092  293728 cri.go:89] found id: ""
	I1206 10:08:52.580169  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.580194  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:52.580206  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:52.580269  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:52.609595  293728 cri.go:89] found id: ""
	I1206 10:08:52.609622  293728 logs.go:282] 0 containers: []
	W1206 10:08:52.609631  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:52.609640  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:52.609658  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:52.666423  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:52.666460  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:52.680542  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:52.680572  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:52.745123  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:52.737007    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.737635    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739181    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.739662    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:52.741168    4977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:08:52.745142  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:52.745154  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:52.771578  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:52.771612  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 10:08:52.722479  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:54.722588  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 10:08:57.222560  287962 node_ready.go:55] error getting node "no-preload-257359" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-257359": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 10:08:58.722277  287962 node_ready.go:38] duration metric: took 6m0.000230261s for node "no-preload-257359" to be "Ready" ...
	I1206 10:08:58.725649  287962 out.go:203] 
	W1206 10:08:58.728547  287962 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:08:58.728572  287962 out.go:285] * 
	W1206 10:08:58.730704  287962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:08:58.733695  287962 out.go:203] 
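Two minikube processes interleave in this log: PID 293728 keeps cycling through the container scans above and below, while PID 287962 (the no-preload-257359 start) exits here once its 6m0s node-readiness budget is spent. Every probe of the apiserver at 192.168.76.2:8443 was refused, so the Ready condition could never be observed. Below is a minimal bash sketch of the poll that timed out; the node name and the 6m/~2s timings come from the log lines above, but the loop itself is illustrative, not minikube's node_ready.go implementation.

#!/usr/bin/env bash
# Poll a node's Ready condition until a deadline, mirroring the
# node_ready.go retries above (names/timings from the log; loop hypothetical).
NODE="no-preload-257359"
deadline=$((SECONDS + 360))   # matches "wait 6m0s for node"
while (( SECONDS < deadline )); do
  status=$(kubectl get node "$NODE" \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)
  [[ "$status" == "True" ]] && { echo "node $NODE is Ready"; exit 0; }
  sleep 2                     # the log shows retries roughly every 2s
done
echo "node $NODE not Ready within 6m" >&2
exit 1

Because the failures are "connection refused" rather than a NotReady condition, the root cause sits one layer lower: kube-apiserver never came up, which the empty crictl listings throughout this section confirm.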
	I1206 10:08:55.300596  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:55.311733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:55.311837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:55.337436  293728 cri.go:89] found id: ""
	I1206 10:08:55.337466  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.337475  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:55.337482  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:55.337557  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:55.362426  293728 cri.go:89] found id: ""
	I1206 10:08:55.362449  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.362457  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:55.362462  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:55.362539  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:55.388462  293728 cri.go:89] found id: ""
	I1206 10:08:55.388488  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.388497  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:55.388503  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:55.388567  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:55.417368  293728 cri.go:89] found id: ""
	I1206 10:08:55.417391  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.417400  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:55.417406  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:55.417465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:55.444014  293728 cri.go:89] found id: ""
	I1206 10:08:55.444052  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.444061  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:55.444067  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:55.444126  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:55.473384  293728 cri.go:89] found id: ""
	I1206 10:08:55.473408  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.473417  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:55.473423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:55.473485  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:55.499095  293728 cri.go:89] found id: ""
	I1206 10:08:55.499119  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.499128  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:55.499134  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:55.499193  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:55.530488  293728 cri.go:89] found id: ""
	I1206 10:08:55.530560  293728 logs.go:282] 0 containers: []
	W1206 10:08:55.530585  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:55.530607  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:55.530642  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:55.543996  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:55.544023  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:55.609232  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:55.600433    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.601179    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.602847    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.603477    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:55.605074    5090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:08:55.609295  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:55.609315  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:55.635259  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:55.635292  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:08:55.663234  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:55.663263  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:58.219942  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:08:58.240184  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:08:58.240251  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:08:58.288171  293728 cri.go:89] found id: ""
	I1206 10:08:58.288193  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.288201  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:08:58.288208  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:08:58.288267  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:08:58.326999  293728 cri.go:89] found id: ""
	I1206 10:08:58.327020  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.327029  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:08:58.327035  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:08:58.327104  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:08:58.354289  293728 cri.go:89] found id: ""
	I1206 10:08:58.354316  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.354325  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:08:58.354331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:08:58.354392  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:08:58.378166  293728 cri.go:89] found id: ""
	I1206 10:08:58.378195  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.378204  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:08:58.378210  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:08:58.378270  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:08:58.405700  293728 cri.go:89] found id: ""
	I1206 10:08:58.405721  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.405734  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:08:58.405740  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:08:58.405800  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:08:58.430772  293728 cri.go:89] found id: ""
	I1206 10:08:58.430800  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.430809  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:08:58.430816  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:08:58.430882  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:08:58.455749  293728 cri.go:89] found id: ""
	I1206 10:08:58.455777  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.455787  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:08:58.455793  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:08:58.455854  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:08:58.480448  293728 cri.go:89] found id: ""
	I1206 10:08:58.480491  293728 logs.go:282] 0 containers: []
	W1206 10:08:58.480502  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:08:58.480512  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:08:58.480527  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:08:58.536659  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:08:58.536697  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:08:58.550566  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:08:58.550589  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:08:58.618059  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:08:58.608926    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.609448    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.611304    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.612003    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:08:58.613723    5203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:08:58.618081  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:08:58.618093  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:08:58.643111  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:08:58.643142  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
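The "container status" probe above is deliberately runtime-agnostic: it prefers crictl (for CRI runtimes such as containerd) and falls back to plain docker when crictl is absent or fails. The same fallback, restated as a standalone sketch:

# Prefer crictl; fall back to docker if crictl is missing or exits non-zero.
# `which crictl || echo crictl` keeps the command word non-empty either way,
# so the sudo invocation fails cleanly and triggers the docker branch.
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

On this containerd node crictl answers (no error is logged), but the per-name scans above all came back empty: the runtime is healthy while no Kubernetes pods exist.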
	I1206 10:09:01.172811  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:01.189894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:01.189970  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:01.216506  293728 cri.go:89] found id: ""
	I1206 10:09:01.216533  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.216542  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:01.216549  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:01.216610  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:01.248643  293728 cri.go:89] found id: ""
	I1206 10:09:01.248667  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.248675  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:01.248681  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:01.248754  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:01.282778  293728 cri.go:89] found id: ""
	I1206 10:09:01.282799  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.282808  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:01.282814  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:01.282874  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:01.317892  293728 cri.go:89] found id: ""
	I1206 10:09:01.317914  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.317923  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:01.317929  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:01.317996  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:01.344569  293728 cri.go:89] found id: ""
	I1206 10:09:01.344596  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.344606  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:01.344612  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:01.344675  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:01.374785  293728 cri.go:89] found id: ""
	I1206 10:09:01.374812  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.374822  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:01.374829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:01.374913  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:01.399962  293728 cri.go:89] found id: ""
	I1206 10:09:01.399986  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.399995  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:01.400001  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:01.400120  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:01.426824  293728 cri.go:89] found id: ""
	I1206 10:09:01.426850  293728 logs.go:282] 0 containers: []
	W1206 10:09:01.426859  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:01.426877  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:01.426904  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:01.484968  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:01.485001  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:01.506470  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:01.506550  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:01.586157  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:01.577286    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.578043    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.579813    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.580524    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:01.582153    5314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:01.586226  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:01.586241  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:01.616859  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:01.617050  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:04.147855  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:04.161529  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:04.161601  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:04.185793  293728 cri.go:89] found id: ""
	I1206 10:09:04.185817  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.185826  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:04.185832  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:04.185893  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:04.213785  293728 cri.go:89] found id: ""
	I1206 10:09:04.213809  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.213818  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:04.213824  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:04.213886  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:04.245746  293728 cri.go:89] found id: ""
	I1206 10:09:04.245769  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.245778  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:04.245784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:04.245844  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:04.276836  293728 cri.go:89] found id: ""
	I1206 10:09:04.276864  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.276873  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:04.276879  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:04.276949  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:04.307027  293728 cri.go:89] found id: ""
	I1206 10:09:04.307054  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.307089  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:04.307096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:04.307171  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:04.332480  293728 cri.go:89] found id: ""
	I1206 10:09:04.332503  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.332511  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:04.332518  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:04.332580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:04.359083  293728 cri.go:89] found id: ""
	I1206 10:09:04.359105  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.359113  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:04.359119  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:04.359178  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:04.384459  293728 cri.go:89] found id: ""
	I1206 10:09:04.384527  293728 logs.go:282] 0 containers: []
	W1206 10:09:04.384560  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:04.384576  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:04.384589  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:04.398476  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:04.398508  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:04.464529  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:04.455141    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.455968    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.457782    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.458361    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:04.459895    5422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:04.464551  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:04.464564  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:04.493800  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:04.493842  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:04.533422  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:04.533455  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
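Each gathering pass collects the same evidence bundle over SSH: the last 400 lines of the kubelet and containerd units plus warning-and-above kernel messages. Run locally on the node, the bundle looks like this (commands copied from the Run: lines above; only the grouping and comments are added):

# Per-pass log bundle, as issued through ssh_runner above.
sudo journalctl -u kubelet -n 400        # kubelet service log
sudo journalctl -u containerd -n 400     # container runtime log
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

The dmesg flags make the output safe to capture non-interactively: -P disables the pager, -H keeps human-readable timestamps, and -L=never strips color codes.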
	I1206 10:09:07.095340  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:07.106226  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:07.106321  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:07.133785  293728 cri.go:89] found id: ""
	I1206 10:09:07.133849  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.133886  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:07.133907  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:07.133972  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:07.169905  293728 cri.go:89] found id: ""
	I1206 10:09:07.169932  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.169957  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:07.169964  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:07.170039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:07.198212  293728 cri.go:89] found id: ""
	I1206 10:09:07.198285  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.198309  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:07.198329  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:07.198499  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:07.236730  293728 cri.go:89] found id: ""
	I1206 10:09:07.236809  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.236842  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:07.236862  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:07.236969  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:07.264908  293728 cri.go:89] found id: ""
	I1206 10:09:07.264984  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.265015  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:07.265037  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:07.265147  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:07.293030  293728 cri.go:89] found id: ""
	I1206 10:09:07.293102  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.293125  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:07.293146  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:07.293253  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:07.320479  293728 cri.go:89] found id: ""
	I1206 10:09:07.320542  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.320572  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:07.320600  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:07.320712  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:07.346369  293728 cri.go:89] found id: ""
	I1206 10:09:07.346431  293728 logs.go:282] 0 containers: []
	W1206 10:09:07.346461  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:07.346486  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:07.346524  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:07.375165  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:07.375244  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:07.433189  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:07.433225  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:07.447472  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:07.447500  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:07.536150  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:07.524233    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.525315    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527184    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.527855    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:07.532128    5548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:07.536173  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:07.536186  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:10.062333  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:10.073694  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:10.073767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:10.101307  293728 cri.go:89] found id: ""
	I1206 10:09:10.101330  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.101339  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:10.101346  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:10.101413  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:10.128394  293728 cri.go:89] found id: ""
	I1206 10:09:10.128420  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.128428  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:10.128436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:10.128497  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:10.154510  293728 cri.go:89] found id: ""
	I1206 10:09:10.154536  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.154545  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:10.154552  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:10.154611  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:10.179782  293728 cri.go:89] found id: ""
	I1206 10:09:10.179808  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.179816  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:10.179822  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:10.179888  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:10.210072  293728 cri.go:89] found id: ""
	I1206 10:09:10.210142  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.210171  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:10.210201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:10.210315  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:10.245657  293728 cri.go:89] found id: ""
	I1206 10:09:10.245676  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.245684  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:10.245691  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:10.245748  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:10.282232  293728 cri.go:89] found id: ""
	I1206 10:09:10.282305  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.282345  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:10.282365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:10.282454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:10.313160  293728 cri.go:89] found id: ""
	I1206 10:09:10.313225  293728 logs.go:282] 0 containers: []
	W1206 10:09:10.313239  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:10.313249  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:10.313261  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:10.373196  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:10.373230  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:10.386792  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:10.386819  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:10.450525  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:10.442280    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.442968    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.444545    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.445040    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:10.446664    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:10.450547  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:10.450560  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:10.476832  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:10.476869  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
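The scan that opens every cycle queries the runtime for each expected control-plane component by name and warns when nothing matches. Condensed into one loop (the component list and crictl flags come from the log; the loop itself is illustrative):

#!/usr/bin/env bash
# One crictl query per expected component, as in the cri.go lines above.
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="$c")
  if [[ -z "$ids" ]]; then
    echo "no container matching \"$c\""   # the W-level lines above
  else
    echo "$c: $ids"
  fi
done

Every pass in this section returns empty IDs for all eight names: containerd is answering, but kubelet has started nothing, which is consistent with the apiserver's connection-refused errors.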
	I1206 10:09:13.012652  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:13.023659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:13.023732  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:13.047365  293728 cri.go:89] found id: ""
	I1206 10:09:13.047458  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.047473  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:13.047480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:13.047541  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:13.072937  293728 cri.go:89] found id: ""
	I1206 10:09:13.072961  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.072970  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:13.072987  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:13.073048  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:13.097439  293728 cri.go:89] found id: ""
	I1206 10:09:13.097515  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.097531  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:13.097539  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:13.097600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:13.123273  293728 cri.go:89] found id: ""
	I1206 10:09:13.123307  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.123316  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:13.123323  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:13.123426  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:13.149441  293728 cri.go:89] found id: ""
	I1206 10:09:13.149518  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.149534  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:13.149542  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:13.149608  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:13.174275  293728 cri.go:89] found id: ""
	I1206 10:09:13.174298  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.174306  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:13.174313  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:13.174379  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:13.203852  293728 cri.go:89] found id: ""
	I1206 10:09:13.203926  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.203942  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:13.203951  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:13.204013  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:13.237842  293728 cri.go:89] found id: ""
	I1206 10:09:13.237866  293728 logs.go:282] 0 containers: []
	W1206 10:09:13.237875  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:13.237884  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:13.237899  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:13.305042  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:13.305078  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:13.319151  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:13.319178  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:13.383092  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:13.374391    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.375129    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.376927    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.377619    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:13.379235    5762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:13.383112  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:13.383123  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:13.409266  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:13.409295  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:15.937340  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:15.948165  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:15.948296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:15.973427  293728 cri.go:89] found id: ""
	I1206 10:09:15.973452  293728 logs.go:282] 0 containers: []
	W1206 10:09:15.973461  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:15.973467  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:15.973529  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:16.006761  293728 cri.go:89] found id: ""
	I1206 10:09:16.006806  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.006816  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:16.006824  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:16.006907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:16.034447  293728 cri.go:89] found id: ""
	I1206 10:09:16.034483  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.034492  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:16.034499  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:16.034572  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:16.060884  293728 cri.go:89] found id: ""
	I1206 10:09:16.060955  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.060972  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:16.060979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:16.061039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:16.090437  293728 cri.go:89] found id: ""
	I1206 10:09:16.090461  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.090470  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:16.090476  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:16.090548  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:16.118175  293728 cri.go:89] found id: ""
	I1206 10:09:16.118201  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.118209  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:16.118222  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:16.118342  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:16.144978  293728 cri.go:89] found id: ""
	I1206 10:09:16.145005  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.145015  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:16.145021  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:16.145083  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:16.169350  293728 cri.go:89] found id: ""
	I1206 10:09:16.169378  293728 logs.go:282] 0 containers: []
	W1206 10:09:16.169392  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:16.169401  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:16.169412  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:16.228680  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:16.228755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:16.243103  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:16.243179  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:16.316618  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:16.307974    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.308682    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310238    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.310832    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:16.312513    5878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:16.316645  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:16.316658  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:16.342620  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:16.342651  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:18.872579  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:18.883111  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:18.883184  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:18.909365  293728 cri.go:89] found id: ""
	I1206 10:09:18.909393  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.909402  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:18.909410  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:18.909480  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:18.933714  293728 cri.go:89] found id: ""
	I1206 10:09:18.933737  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.933746  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:18.933752  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:18.933811  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:18.963141  293728 cri.go:89] found id: ""
	I1206 10:09:18.963206  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.963228  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:18.963245  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:18.963333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:18.988486  293728 cri.go:89] found id: ""
	I1206 10:09:18.988511  293728 logs.go:282] 0 containers: []
	W1206 10:09:18.988519  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:18.988526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:18.988604  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:19.020422  293728 cri.go:89] found id: ""
	I1206 10:09:19.020448  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.020456  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:19.020463  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:19.020543  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:19.045103  293728 cri.go:89] found id: ""
	I1206 10:09:19.045164  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.045179  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:19.045186  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:19.045245  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:19.069289  293728 cri.go:89] found id: ""
	I1206 10:09:19.069322  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.069331  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:19.069337  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:19.069403  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:19.094504  293728 cri.go:89] found id: ""
	I1206 10:09:19.094539  293728 logs.go:282] 0 containers: []
	W1206 10:09:19.094547  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:19.094557  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:19.094569  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:19.108440  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:19.108469  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:19.175508  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:19.166822    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.167472    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169260    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.169788    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:19.171507    5984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:19.175529  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:19.175542  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:19.201390  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:19.201424  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:19.243342  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:19.243364  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
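
	Every "describe nodes" attempt fails identically: kubectl on the node dials https://localhost:8443 and the TCP connection is refused, which is consistent with the empty crictl listings (no kube-apiserver container exists to listen there). A quick hand-check that separates "nothing listening" from TLS or auth problems, sketched with curl (run inside the node, e.g. via minikube ssh; the -k flag and /healthz path are illustrative choices, not taken from the log):

	    # Illustrative reachability probe; "connection refused" means no listener at all.
	    if curl -skf --max-time 5 https://localhost:8443/healthz >/dev/null; then
	      echo "kube-apiserver is answering on 8443"
	    else
	      echo "8443 unreachable - consistent with the connection-refused errors above"
	    fi
	    # Cross-check the process table the way the log's pgrep line does:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
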
	I1206 10:09:21.808230  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:21.818876  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:21.818955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:21.848634  293728 cri.go:89] found id: ""
	I1206 10:09:21.848655  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.848663  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:21.848669  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:21.848728  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:21.872798  293728 cri.go:89] found id: ""
	I1206 10:09:21.872861  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.872875  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:21.872882  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:21.872938  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:21.900148  293728 cri.go:89] found id: ""
	I1206 10:09:21.900174  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.900183  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:21.900190  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:21.900250  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:21.924786  293728 cri.go:89] found id: ""
	I1206 10:09:21.924813  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.924822  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:21.924829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:21.924915  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:21.954178  293728 cri.go:89] found id: ""
	I1206 10:09:21.954212  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.954221  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:21.954227  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:21.954296  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:21.979818  293728 cri.go:89] found id: ""
	I1206 10:09:21.979842  293728 logs.go:282] 0 containers: []
	W1206 10:09:21.979850  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:21.979857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:21.979916  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:22.008409  293728 cri.go:89] found id: ""
	I1206 10:09:22.008435  293728 logs.go:282] 0 containers: []
	W1206 10:09:22.008445  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:22.008452  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:22.008527  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:22.035338  293728 cri.go:89] found id: ""
	I1206 10:09:22.035363  293728 logs.go:282] 0 containers: []
	W1206 10:09:22.035396  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:22.035407  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:22.035418  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:22.091435  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:22.091472  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:22.105532  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:22.105567  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:22.171773  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:22.163104    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.163868    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.165557    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.166181    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:22.167828    6101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:22.171793  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:22.171806  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:22.197667  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:22.197706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:24.735529  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:24.748375  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:24.748558  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:24.788906  293728 cri.go:89] found id: ""
	I1206 10:09:24.788978  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.789002  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:24.789024  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:24.789113  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:24.818364  293728 cri.go:89] found id: ""
	I1206 10:09:24.818431  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.818453  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:24.818472  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:24.818564  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:24.845760  293728 cri.go:89] found id: ""
	I1206 10:09:24.845802  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.845811  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:24.845817  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:24.845889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:24.872973  293728 cri.go:89] found id: ""
	I1206 10:09:24.872997  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.873006  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:24.873012  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:24.873076  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:24.902758  293728 cri.go:89] found id: ""
	I1206 10:09:24.902791  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.902801  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:24.902809  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:24.902885  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:24.929539  293728 cri.go:89] found id: ""
	I1206 10:09:24.929565  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.929575  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:24.929582  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:24.929665  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:24.955731  293728 cri.go:89] found id: ""
	I1206 10:09:24.955806  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.955822  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:24.955829  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:24.955891  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:24.980673  293728 cri.go:89] found id: ""
	I1206 10:09:24.980704  293728 logs.go:282] 0 containers: []
	W1206 10:09:24.980713  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:24.980722  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:24.980734  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:25.017868  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:25.017899  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:25.077472  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:25.077510  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:25.093107  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:25.093139  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:25.164597  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:25.155645    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.156390    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158149    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.158952    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:25.160572    6228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:25.164635  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:25.164649  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:27.694118  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:27.704932  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:27.705013  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:27.734684  293728 cri.go:89] found id: ""
	I1206 10:09:27.734762  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.734784  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:27.734802  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:27.734892  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:27.771355  293728 cri.go:89] found id: ""
	I1206 10:09:27.771442  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.771466  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:27.771485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:27.771568  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:27.800742  293728 cri.go:89] found id: ""
	I1206 10:09:27.800818  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.800836  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:27.800844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:27.800907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:27.827029  293728 cri.go:89] found id: ""
	I1206 10:09:27.827058  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.827068  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:27.827075  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:27.827136  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:27.853299  293728 cri.go:89] found id: ""
	I1206 10:09:27.853323  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.853332  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:27.853339  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:27.853431  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:27.878371  293728 cri.go:89] found id: ""
	I1206 10:09:27.878394  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.878402  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:27.878415  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:27.878525  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:27.903247  293728 cri.go:89] found id: ""
	I1206 10:09:27.903269  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.903277  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:27.903283  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:27.903405  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:27.927665  293728 cri.go:89] found id: ""
	I1206 10:09:27.927687  293728 logs.go:282] 0 containers: []
	W1206 10:09:27.927695  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:27.927703  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:27.927714  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:27.993787  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:27.984910    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.985739    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.987460    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.988125    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:27.989907    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:27.993808  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:27.993820  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:28.021097  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:28.021132  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:28.050410  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:28.050438  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:28.108602  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:28.108636  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:30.622836  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:30.633282  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:30.633354  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:30.657827  293728 cri.go:89] found id: ""
	I1206 10:09:30.657850  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.657859  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:30.657865  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:30.657929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:30.685495  293728 cri.go:89] found id: ""
	I1206 10:09:30.685525  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.685534  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:30.685541  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:30.685611  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:30.710543  293728 cri.go:89] found id: ""
	I1206 10:09:30.710576  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.710585  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:30.710591  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:30.710661  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:30.738572  293728 cri.go:89] found id: ""
	I1206 10:09:30.738667  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.738690  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:30.738710  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:30.738815  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:30.782603  293728 cri.go:89] found id: ""
	I1206 10:09:30.782684  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.782706  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:30.782725  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:30.782829  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:30.810264  293728 cri.go:89] found id: ""
	I1206 10:09:30.810342  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.810364  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:30.810387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:30.810479  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:30.835864  293728 cri.go:89] found id: ""
	I1206 10:09:30.835944  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.835960  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:30.835968  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:30.836050  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:30.860832  293728 cri.go:89] found id: ""
	I1206 10:09:30.860858  293728 logs.go:282] 0 containers: []
	W1206 10:09:30.860867  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:30.860876  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:30.860887  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:30.917397  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:30.917433  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:30.931490  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:30.931572  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:31.004606  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:30.993339    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.994064    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.995768    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.996292    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:30.997903    6442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:31.004692  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:31.004725  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:31.033130  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:31.033168  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:33.563282  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:33.574558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:33.574631  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:33.600752  293728 cri.go:89] found id: ""
	I1206 10:09:33.600784  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.600797  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:33.600804  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:33.600876  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:33.626879  293728 cri.go:89] found id: ""
	I1206 10:09:33.626909  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.626919  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:33.626925  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:33.626987  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:33.652921  293728 cri.go:89] found id: ""
	I1206 10:09:33.652945  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.652954  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:33.652960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:33.653025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:33.678584  293728 cri.go:89] found id: ""
	I1206 10:09:33.678619  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.678627  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:33.678634  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:33.678704  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:33.706401  293728 cri.go:89] found id: ""
	I1206 10:09:33.706424  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.706433  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:33.706439  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:33.706514  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:33.754300  293728 cri.go:89] found id: ""
	I1206 10:09:33.754326  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.754334  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:33.754341  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:33.754410  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:33.782351  293728 cri.go:89] found id: ""
	I1206 10:09:33.782388  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.782397  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:33.782410  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:33.782479  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:33.809362  293728 cri.go:89] found id: ""
	I1206 10:09:33.809399  293728 logs.go:282] 0 containers: []
	W1206 10:09:33.809407  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:33.809417  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:33.809428  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:33.845485  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:33.845510  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:33.902066  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:33.902106  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:33.915843  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:33.915871  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:33.983566  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:33.974999    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.975872    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977595    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.977932    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:33.979555    6564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:33.983589  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:33.983610  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:36.512857  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:36.524687  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:36.524752  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:36.559537  293728 cri.go:89] found id: ""
	I1206 10:09:36.559559  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.559568  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:36.559574  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:36.559641  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:36.584964  293728 cri.go:89] found id: ""
	I1206 10:09:36.585033  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.585049  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:36.585056  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:36.585124  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:36.610724  293728 cri.go:89] found id: ""
	I1206 10:09:36.610750  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.610759  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:36.610765  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:36.610824  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:36.641090  293728 cri.go:89] found id: ""
	I1206 10:09:36.641158  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.641185  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:36.641198  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:36.641287  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:36.665900  293728 cri.go:89] found id: ""
	I1206 10:09:36.665926  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.665935  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:36.665941  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:36.666004  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:36.693620  293728 cri.go:89] found id: ""
	I1206 10:09:36.693650  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.693659  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:36.693666  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:36.693731  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:36.734543  293728 cri.go:89] found id: ""
	I1206 10:09:36.734621  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.734646  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:36.734665  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:36.734757  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:36.776081  293728 cri.go:89] found id: ""
	I1206 10:09:36.776146  293728 logs.go:282] 0 containers: []
	W1206 10:09:36.776168  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:36.776188  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:36.776226  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:36.792679  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:36.792711  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:36.861792  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:36.852688    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.853295    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855348    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.855858    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:36.857501    6664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:36.861815  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:36.861828  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:36.887686  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:36.887722  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:36.915203  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:36.915229  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
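The cycle above is the apiserver wait loop: probe for a kube-apiserver process, then ask the CRI for each expected control-plane container by name, and gather logs when nothing turns up. A minimal sketch for replaying the per-component check by hand, assuming the functional-090986 node is reachable via minikube ssh (the crictl invocation is the one from the log):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  # --quiet prints only container IDs; empty output corresponds to the
	  # 'found id: ""' / '0 containers' lines in the log above.
	  ids=$(minikube -p functional-090986 ssh "sudo crictl ps -a --quiet --name=$c")
	  echo "$c: ${ids:-<none>}"
	done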
	I1206 10:09:39.473166  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:39.484986  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:39.485070  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:39.519022  293728 cri.go:89] found id: ""
	I1206 10:09:39.519084  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.519097  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:39.519105  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:39.519183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:39.550949  293728 cri.go:89] found id: ""
	I1206 10:09:39.550987  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.551002  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:39.551009  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:39.551083  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:39.576090  293728 cri.go:89] found id: ""
	I1206 10:09:39.576120  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.576129  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:39.576136  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:39.576199  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:39.602338  293728 cri.go:89] found id: ""
	I1206 10:09:39.602364  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.602374  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:39.602386  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:39.602447  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:39.627803  293728 cri.go:89] found id: ""
	I1206 10:09:39.627841  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.627850  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:39.627857  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:39.627929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:39.653348  293728 cri.go:89] found id: ""
	I1206 10:09:39.653376  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.653385  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:39.653392  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:39.653454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:39.679324  293728 cri.go:89] found id: ""
	I1206 10:09:39.679418  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.679434  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:39.679442  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:39.679515  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:39.704684  293728 cri.go:89] found id: ""
	I1206 10:09:39.704708  293728 logs.go:282] 0 containers: []
	W1206 10:09:39.704717  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:39.704726  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:39.704738  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:39.764873  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:39.764905  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:39.779533  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:39.779558  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:39.852807  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:39.844502    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.845176    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.846778    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.847166    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:39.848722    6779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
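Every kubectl probe in this stretch fails the same way: nothing is answering on localhost:8443 inside the node. Two quick checks that could confirm this by hand, run inside the node (e.g. via minikube -p functional-090986 ssh); ss and the /livez health endpoint are assumptions here, not part of the test harness:

	# Is anything bound to the apiserver port at all?
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	# /livez is a standard kube-apiserver health endpoint; -k skips TLS verification.
	curl -ksf https://localhost:8443/livez || echo "apiserver not responding"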
	I1206 10:09:39.852829  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:39.852842  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:39.879753  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:39.879787  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:42.409609  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:42.421328  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:42.421397  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:42.447308  293728 cri.go:89] found id: ""
	I1206 10:09:42.447333  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.447342  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:42.447349  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:42.447440  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:42.481946  293728 cri.go:89] found id: ""
	I1206 10:09:42.481977  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.481985  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:42.481992  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:42.482055  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:42.514307  293728 cri.go:89] found id: ""
	I1206 10:09:42.514378  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.514401  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:42.514420  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:42.514512  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:42.546780  293728 cri.go:89] found id: ""
	I1206 10:09:42.546806  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.546815  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:42.546822  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:42.546891  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:42.573407  293728 cri.go:89] found id: ""
	I1206 10:09:42.573430  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.573439  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:42.573445  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:42.573501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:42.599133  293728 cri.go:89] found id: ""
	I1206 10:09:42.599156  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.599164  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:42.599171  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:42.599233  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:42.625000  293728 cri.go:89] found id: ""
	I1206 10:09:42.625028  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.625037  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:42.625043  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:42.625107  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:42.654408  293728 cri.go:89] found id: ""
	I1206 10:09:42.654436  293728 logs.go:282] 0 containers: []
	W1206 10:09:42.654446  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:42.654455  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:42.654467  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:42.711699  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:42.711733  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:42.727806  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:42.727881  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:42.811421  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:42.801418    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.803078    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.804330    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.805386    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:42.807056    6893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:42.811446  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:42.811460  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:42.838410  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:42.838445  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
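The "container status" gather above is a small shell fallback chain; the same command, unpacked with comments:

	# `which crictl || echo crictl` substitutes crictl's full path when installed,
	# and otherwise leaves the bare name so the failure message still shows what
	# was attempted; if the crictl call fails outright, fall back to the Docker CLI.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a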
	I1206 10:09:45.369084  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:45.380279  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:45.380388  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:45.405587  293728 cri.go:89] found id: ""
	I1206 10:09:45.405612  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.405621  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:45.405628  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:45.405688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:45.433060  293728 cri.go:89] found id: ""
	I1206 10:09:45.433088  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.433097  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:45.433103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:45.433164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:45.460740  293728 cri.go:89] found id: ""
	I1206 10:09:45.460763  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.460772  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:45.460778  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:45.460837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:45.497706  293728 cri.go:89] found id: ""
	I1206 10:09:45.497771  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.497793  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:45.497813  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:45.497904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:45.534656  293728 cri.go:89] found id: ""
	I1206 10:09:45.534681  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.534690  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:45.534696  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:45.534770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:45.564269  293728 cri.go:89] found id: ""
	I1206 10:09:45.564350  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.564372  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:45.564387  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:45.564474  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:45.588438  293728 cri.go:89] found id: ""
	I1206 10:09:45.588517  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.588539  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:45.588558  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:45.588651  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:45.613920  293728 cri.go:89] found id: ""
	I1206 10:09:45.613951  293728 logs.go:282] 0 containers: []
	W1206 10:09:45.613960  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:45.613970  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:45.613980  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:45.641788  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:45.641863  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:45.699089  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:45.699123  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:45.712662  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:45.712734  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:45.793739  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:45.785473    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.786020    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.787671    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.788175    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:45.789766    7013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:45.793759  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:45.793773  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
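Each retry opens with the same pgrep readiness probe (next line below); its flags, spelled out:

	# -f  match the pattern against the full command line, not just the process name
	# -x  require the pattern to match that command line exactly
	# -n  if several processes match, report only the newest one
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found" || echo "no apiserver process"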
	I1206 10:09:48.320858  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:48.331937  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:48.332070  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:48.356716  293728 cri.go:89] found id: ""
	I1206 10:09:48.356784  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.356798  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:48.356806  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:48.356866  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:48.382138  293728 cri.go:89] found id: ""
	I1206 10:09:48.382172  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.382181  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:48.382188  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:48.382258  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:48.408214  293728 cri.go:89] found id: ""
	I1206 10:09:48.408238  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.408247  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:48.408253  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:48.408313  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:48.433328  293728 cri.go:89] found id: ""
	I1206 10:09:48.433351  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.433360  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:48.433366  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:48.433428  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:48.460263  293728 cri.go:89] found id: ""
	I1206 10:09:48.460284  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.460292  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:48.460298  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:48.460355  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:48.488344  293728 cri.go:89] found id: ""
	I1206 10:09:48.488373  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.488381  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:48.488388  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:48.488452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:48.521629  293728 cri.go:89] found id: ""
	I1206 10:09:48.521658  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.521666  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:48.521673  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:48.521759  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:48.549255  293728 cri.go:89] found id: ""
	I1206 10:09:48.549321  293728 logs.go:282] 0 containers: []
	W1206 10:09:48.549344  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:48.549365  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:48.549392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:48.609413  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:48.609450  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:48.623661  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:48.623688  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:48.693637  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:48.684667    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.685431    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687132    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.687585    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:48.689240    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:48.693661  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:48.693674  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:48.719587  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:48.719660  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
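The recurring "describe nodes" probe uses the kubectl binary and kubeconfig that minikube provisions inside the node. A sketch for replaying it by hand, assuming the same profile and minikube ssh access; the binary and kubeconfig paths are taken verbatim from the log:

	minikube -p functional-090986 ssh "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"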
	I1206 10:09:51.258260  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:51.268785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:51.268856  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:51.295768  293728 cri.go:89] found id: ""
	I1206 10:09:51.295793  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.295801  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:51.295808  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:51.295879  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:51.321853  293728 cri.go:89] found id: ""
	I1206 10:09:51.321886  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.321894  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:51.321900  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:51.321968  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:51.347472  293728 cri.go:89] found id: ""
	I1206 10:09:51.347494  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.347502  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:51.347517  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:51.347575  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:51.371656  293728 cri.go:89] found id: ""
	I1206 10:09:51.371683  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.371692  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:51.371698  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:51.371758  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:51.397262  293728 cri.go:89] found id: ""
	I1206 10:09:51.397289  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.397298  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:51.397305  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:51.397409  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:51.423015  293728 cri.go:89] found id: ""
	I1206 10:09:51.423045  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.423061  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:51.423076  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:51.423149  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:51.454355  293728 cri.go:89] found id: ""
	I1206 10:09:51.454381  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.454390  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:51.454396  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:51.454463  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:51.486768  293728 cri.go:89] found id: ""
	I1206 10:09:51.486808  293728 logs.go:282] 0 containers: []
	W1206 10:09:51.486823  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:51.486832  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:51.486843  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:51.554153  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:51.554192  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:51.568560  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:51.568590  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:51.634642  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:51.626552    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.627100    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.628640    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.629103    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:51.630610    7226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:51.634664  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:51.634678  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:51.660429  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:51.660463  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
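The kubelet and containerd gathers are plain systemd unit queries; annotated:

	# -u <unit>  only entries logged by that systemd unit
	# -n 400     limit output to the most recent 400 lines
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400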
	I1206 10:09:54.188738  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:54.201905  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:54.201981  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:54.227986  293728 cri.go:89] found id: ""
	I1206 10:09:54.228012  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.228021  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:54.228028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:54.228113  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:54.254201  293728 cri.go:89] found id: ""
	I1206 10:09:54.254235  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.254245  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:54.254283  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:54.254395  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:54.278782  293728 cri.go:89] found id: ""
	I1206 10:09:54.278820  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.278830  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:54.278852  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:54.278935  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:54.303206  293728 cri.go:89] found id: ""
	I1206 10:09:54.303240  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.303249  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:54.303256  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:54.303323  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:54.328700  293728 cri.go:89] found id: ""
	I1206 10:09:54.328726  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.328735  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:54.328741  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:54.328818  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:54.352531  293728 cri.go:89] found id: ""
	I1206 10:09:54.352613  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.352638  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:54.352656  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:54.352746  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:54.381751  293728 cri.go:89] found id: ""
	I1206 10:09:54.381785  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.381795  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:54.381802  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:54.381873  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:54.410917  293728 cri.go:89] found id: ""
	I1206 10:09:54.410993  293728 logs.go:282] 0 containers: []
	W1206 10:09:54.411015  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:54.411037  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:54.411076  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:54.440257  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:54.440285  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:09:54.500235  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:54.500278  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:54.515938  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:54.515966  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:54.588801  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:54.579599    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.580550    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582125    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.582602    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:54.584281    7350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:54.588823  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:54.588836  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:57.116312  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:09:57.127033  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:09:57.127111  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:09:57.152251  293728 cri.go:89] found id: ""
	I1206 10:09:57.152273  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.152282  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:09:57.152288  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:09:57.152346  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:09:57.176684  293728 cri.go:89] found id: ""
	I1206 10:09:57.176758  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.176773  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:09:57.176781  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:09:57.176840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:09:57.202374  293728 cri.go:89] found id: ""
	I1206 10:09:57.202436  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.202470  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:09:57.202494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:09:57.202580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:09:57.227547  293728 cri.go:89] found id: ""
	I1206 10:09:57.227573  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.227582  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:09:57.227589  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:09:57.227650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:09:57.253673  293728 cri.go:89] found id: ""
	I1206 10:09:57.253705  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.253714  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:09:57.253721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:09:57.253789  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:09:57.278618  293728 cri.go:89] found id: ""
	I1206 10:09:57.278644  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.278654  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:09:57.278660  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:09:57.278722  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:09:57.304336  293728 cri.go:89] found id: ""
	I1206 10:09:57.304384  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.304397  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:09:57.304423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:09:57.304508  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:09:57.334469  293728 cri.go:89] found id: ""
	I1206 10:09:57.334492  293728 logs.go:282] 0 containers: []
	W1206 10:09:57.334500  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:09:57.334508  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:09:57.334520  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:09:57.348891  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:09:57.348922  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:09:57.415906  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:09:57.407558    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.408081    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.409719    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.410287    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:09:57.411964    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:09:57.415927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:09:57.415939  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:09:57.441880  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:09:57.441918  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:09:57.475269  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:09:57.475297  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
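And the dmesg gather that recurs in each cycle, annotated (util-linux dmesg flags):

	# -P          do not pipe output into a pager
	# -H          human-readable output
	# -L=never    disable colored output in the captured text
	# --level ... keep only messages of warning severity or worse
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400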
	I1206 10:10:00.036981  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:00.091003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:00.091183  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:00.199598  293728 cri.go:89] found id: ""
	I1206 10:10:00.199642  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.199652  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:00.199660  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:00.199761  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:00.291513  293728 cri.go:89] found id: ""
	I1206 10:10:00.291550  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.291562  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:00.291569  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:00.291653  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:00.363428  293728 cri.go:89] found id: ""
	I1206 10:10:00.363514  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.363541  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:00.363559  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:00.363706  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:00.471969  293728 cri.go:89] found id: ""
	I1206 10:10:00.471994  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.472004  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:00.472013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:00.472080  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:00.548937  293728 cri.go:89] found id: ""
	I1206 10:10:00.548960  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.548969  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:00.548976  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:00.549039  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:00.612750  293728 cri.go:89] found id: ""
	I1206 10:10:00.612774  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.612783  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:00.612790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:00.612857  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:00.648024  293728 cri.go:89] found id: ""
	I1206 10:10:00.648051  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.648061  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:00.648068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:00.648145  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:00.678506  293728 cri.go:89] found id: ""
	I1206 10:10:00.678587  293728 logs.go:282] 0 containers: []
	W1206 10:10:00.678615  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:00.678636  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:00.678671  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:00.755139  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:00.755237  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:00.771588  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:00.771629  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:00.849622  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:00.840203    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.840934    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.842739    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.843443    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.845027    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:00.840203    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.840934    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.842739    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.843443    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:00.845027    7563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:00.849656  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:00.849669  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:00.876546  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:00.876583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:03.409148  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:03.420472  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:03.420547  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:03.449464  293728 cri.go:89] found id: ""
	I1206 10:10:03.449487  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.449496  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:03.449521  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:03.449598  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:03.482241  293728 cri.go:89] found id: ""
	I1206 10:10:03.482267  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.482276  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:03.482286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:03.482349  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:03.512048  293728 cri.go:89] found id: ""
	I1206 10:10:03.512075  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.512084  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:03.512090  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:03.512153  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:03.544039  293728 cri.go:89] found id: ""
	I1206 10:10:03.544064  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.544073  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:03.544080  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:03.544159  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:03.568866  293728 cri.go:89] found id: ""
	I1206 10:10:03.568942  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.568966  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:03.568978  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:03.569071  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:03.595896  293728 cri.go:89] found id: ""
	I1206 10:10:03.595930  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.595940  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:03.595946  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:03.596020  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:03.620834  293728 cri.go:89] found id: ""
	I1206 10:10:03.620863  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.620871  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:03.620878  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:03.620950  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:03.644327  293728 cri.go:89] found id: ""
	I1206 10:10:03.644359  293728 logs.go:282] 0 containers: []
	W1206 10:10:03.644368  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:03.644377  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:03.644392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:03.707856  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:03.699517    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.700161    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.701732    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.702251    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.703903    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:03.699517    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.700161    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.701732    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.702251    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:03.703903    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:03.707879  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:03.707891  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:03.735529  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:03.735562  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:03.767489  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:03.767516  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:03.831889  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:03.831926  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:06.346582  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:06.357845  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:06.357929  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:06.387151  293728 cri.go:89] found id: ""
	I1206 10:10:06.387176  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.387185  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:06.387192  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:06.387256  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:06.413165  293728 cri.go:89] found id: ""
	I1206 10:10:06.413194  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.413203  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:06.413210  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:06.413271  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:06.437677  293728 cri.go:89] found id: ""
	I1206 10:10:06.437701  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.437710  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:06.437716  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:06.437772  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:06.463040  293728 cri.go:89] found id: ""
	I1206 10:10:06.463070  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.463080  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:06.463087  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:06.463150  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:06.494675  293728 cri.go:89] found id: ""
	I1206 10:10:06.494751  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.494774  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:06.494794  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:06.494889  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:06.526246  293728 cri.go:89] found id: ""
	I1206 10:10:06.526316  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.526337  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:06.526357  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:06.526440  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:06.559804  293728 cri.go:89] found id: ""
	I1206 10:10:06.559829  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.559839  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:06.559845  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:06.559907  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:06.589855  293728 cri.go:89] found id: ""
	I1206 10:10:06.589930  293728 logs.go:282] 0 containers: []
	W1206 10:10:06.589964  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:06.590003  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:06.590032  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:06.616596  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:06.616632  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:06.646994  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:06.647021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:06.702957  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:06.702993  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:06.716751  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:06.716778  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:06.798071  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:06.789752    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.790292    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.791805    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.792344    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.793982    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:06.789752    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.790292    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.791805    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.792344    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:06.793982    7799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:09.298347  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:09.308960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:09.309035  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:09.333650  293728 cri.go:89] found id: ""
	I1206 10:10:09.333675  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.333683  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:09.333690  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:09.333767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:09.357861  293728 cri.go:89] found id: ""
	I1206 10:10:09.357885  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.357894  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:09.357900  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:09.358010  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:09.382744  293728 cri.go:89] found id: ""
	I1206 10:10:09.382770  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.382779  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:09.382785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:09.382878  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:09.413180  293728 cri.go:89] found id: ""
	I1206 10:10:09.413259  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.413282  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:09.413295  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:09.413376  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:09.438201  293728 cri.go:89] found id: ""
	I1206 10:10:09.438227  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.438235  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:09.438242  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:09.438300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:09.462981  293728 cri.go:89] found id: ""
	I1206 10:10:09.463058  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.463084  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:09.463103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:09.463199  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:09.489818  293728 cri.go:89] found id: ""
	I1206 10:10:09.489840  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.489849  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:09.489855  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:09.489914  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:09.517662  293728 cri.go:89] found id: ""
	I1206 10:10:09.517689  293728 logs.go:282] 0 containers: []
	W1206 10:10:09.517698  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:09.517707  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:09.517719  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:09.576466  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:09.576502  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:09.590374  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:09.590401  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:09.655862  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:09.646406    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.646998    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.648878    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.649656    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.651513    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:09.646406    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.646998    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.648878    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.649656    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:09.651513    7898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:09.655883  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:09.655895  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:09.681441  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:09.681477  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
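Every kubectl attempt in these cycles fails the same way, with dial tcp [::1]:8443: connect: connection refused. That means nothing is listening on the apiserver port at all, rather than the server answering with an error. A quick way to confirm that from inside the node, sketched on the assumption that curl and ss are available in the node image (/livez is a standard kube-apiserver health endpoint):

    minikube ssh -p functional-090986 "sudo ss -ltnp | grep 8443 || echo 'nothing listening on :8443'"
    minikube ssh -p functional-090986 "curl -sk https://localhost:8443/livez; echo"

With no listener, the curl fails with the same connection-refused error kubectl reports above.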
	I1206 10:10:12.211127  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:12.222215  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:12.222285  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:12.247472  293728 cri.go:89] found id: ""
	I1206 10:10:12.247547  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.247562  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:12.247573  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:12.247633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:12.272505  293728 cri.go:89] found id: ""
	I1206 10:10:12.272533  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.272543  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:12.272550  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:12.272638  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:12.297673  293728 cri.go:89] found id: ""
	I1206 10:10:12.297698  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.297707  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:12.297715  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:12.297830  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:12.322568  293728 cri.go:89] found id: ""
	I1206 10:10:12.322609  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.322618  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:12.322625  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:12.322701  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:12.349304  293728 cri.go:89] found id: ""
	I1206 10:10:12.349331  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.349341  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:12.349347  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:12.349443  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:12.375736  293728 cri.go:89] found id: ""
	I1206 10:10:12.375762  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.375771  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:12.375778  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:12.375840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:12.400942  293728 cri.go:89] found id: ""
	I1206 10:10:12.400966  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.400974  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:12.400981  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:12.401040  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:12.426874  293728 cri.go:89] found id: ""
	I1206 10:10:12.426916  293728 logs.go:282] 0 containers: []
	W1206 10:10:12.426926  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:12.426936  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:12.426948  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:12.484510  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:12.484587  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:12.499107  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:12.499186  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:12.572427  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:12.563920    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.564850    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566425    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566780    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.568265    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:12.563920    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.564850    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566425    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.566780    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:12.568265    8012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:12.572450  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:12.572466  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:12.598814  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:12.598849  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:15.128638  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:15.139805  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:15.139876  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:15.165109  293728 cri.go:89] found id: ""
	I1206 10:10:15.165133  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.165149  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:15.165156  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:15.165219  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:15.196948  293728 cri.go:89] found id: ""
	I1206 10:10:15.196974  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.196982  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:15.196989  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:15.197059  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:15.222058  293728 cri.go:89] found id: ""
	I1206 10:10:15.222082  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.222090  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:15.222096  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:15.222155  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:15.248215  293728 cri.go:89] found id: ""
	I1206 10:10:15.248238  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.248247  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:15.248254  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:15.248312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:15.273082  293728 cri.go:89] found id: ""
	I1206 10:10:15.273104  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.273113  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:15.273120  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:15.273179  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:15.298006  293728 cri.go:89] found id: ""
	I1206 10:10:15.298029  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.298037  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:15.298043  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:15.298101  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:15.322519  293728 cri.go:89] found id: ""
	I1206 10:10:15.322542  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.322550  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:15.322557  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:15.322615  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:15.347746  293728 cri.go:89] found id: ""
	I1206 10:10:15.347770  293728 logs.go:282] 0 containers: []
	W1206 10:10:15.347778  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:15.347786  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:15.347797  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:15.361534  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:15.361561  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:15.427348  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:15.418245    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.419137    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421066    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421690    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.423366    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:15.418245    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.419137    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421066    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.421690    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:15.423366    8119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:15.427370  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:15.427404  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:15.453826  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:15.453864  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:15.487015  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:15.487049  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:18.053317  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:18.064493  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:18.064566  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:18.089748  293728 cri.go:89] found id: ""
	I1206 10:10:18.089773  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.089782  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:18.089789  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:18.089850  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:18.116011  293728 cri.go:89] found id: ""
	I1206 10:10:18.116039  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.116048  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:18.116055  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:18.116116  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:18.146676  293728 cri.go:89] found id: ""
	I1206 10:10:18.146701  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.146710  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:18.146716  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:18.146783  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:18.172596  293728 cri.go:89] found id: ""
	I1206 10:10:18.172621  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.172631  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:18.172643  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:18.172703  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:18.198506  293728 cri.go:89] found id: ""
	I1206 10:10:18.198584  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.198608  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:18.198630  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:18.198747  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:18.230708  293728 cri.go:89] found id: ""
	I1206 10:10:18.230786  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.230812  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:18.230830  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:18.230955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:18.257169  293728 cri.go:89] found id: ""
	I1206 10:10:18.257235  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.257250  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:18.257257  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:18.257317  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:18.285950  293728 cri.go:89] found id: ""
	I1206 10:10:18.285976  293728 logs.go:282] 0 containers: []
	W1206 10:10:18.285985  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:18.285994  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:18.286006  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:18.318446  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:18.318471  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:18.379191  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:18.379227  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:18.393268  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:18.393295  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:18.458997  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:18.449882    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.450796    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452543    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452857    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.454349    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:18.449882    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.450796    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452543    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.452857    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:18.454349    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:18.459023  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:18.459035  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:20.987221  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:20.999561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:20.999633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:21.037750  293728 cri.go:89] found id: ""
	I1206 10:10:21.037771  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.037780  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:21.037786  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:21.037846  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:21.063327  293728 cri.go:89] found id: ""
	I1206 10:10:21.063350  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.063358  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:21.063364  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:21.063448  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:21.088200  293728 cri.go:89] found id: ""
	I1206 10:10:21.088223  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.088231  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:21.088237  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:21.088298  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:21.118025  293728 cri.go:89] found id: ""
	I1206 10:10:21.118051  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.118061  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:21.118068  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:21.118126  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:21.143740  293728 cri.go:89] found id: ""
	I1206 10:10:21.143770  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.143779  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:21.143785  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:21.143848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:21.169323  293728 cri.go:89] found id: ""
	I1206 10:10:21.169401  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.169417  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:21.169424  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:21.169501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:21.194291  293728 cri.go:89] found id: ""
	I1206 10:10:21.194356  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.194380  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:21.194398  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:21.194490  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:21.219471  293728 cri.go:89] found id: ""
	I1206 10:10:21.219599  293728 logs.go:282] 0 containers: []
	W1206 10:10:21.219653  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:21.219679  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:21.219706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:21.277216  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:21.277252  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:21.291736  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:21.291766  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:21.366215  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:21.357353    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.358173    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.359989    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.360738    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.362264    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:21.357353    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.358173    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.359989    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.360738    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:21.362264    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:21.366236  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:21.366250  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:21.392405  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:21.392437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
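Each gather cycle above reduces to one probe per control-plane component: run "crictl ps -a --quiet --name=<component>" and treat empty stdout as "no container found" (the cri.go:89 'found id: ""' lines). Below is a minimal standalone sketch of that check in Go, assuming crictl is installed and runnable locally with sudo; minikube itself issues the command over SSH through ssh_runner, so this illustrates the probe, it is not minikube's code path.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same probe the log shows. With --quiet, crictl
// prints one container ID per line, so empty output means no matching
// container exists, running or exited (-a includes exited ones).
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(name)
		switch {
		case err != nil:
			fmt.Printf("probe %s: %v\n", name, err)
		case len(ids) == 0:
			fmt.Printf("no container was found matching %q\n", name)
		default:
			fmt.Printf("%s: %v\n", name, ids)
		}
	}
}

In this run every probe returns an empty ID list, which is why the gather step falls back to journalctl, dmesg, and kubectl describe nodes for diagnostics.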
	I1206 10:10:23.923653  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:23.934595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:23.934670  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:23.961107  293728 cri.go:89] found id: ""
	I1206 10:10:23.961130  293728 logs.go:282] 0 containers: []
	W1206 10:10:23.961138  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:23.961145  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:23.961209  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:23.994692  293728 cri.go:89] found id: ""
	I1206 10:10:23.994729  293728 logs.go:282] 0 containers: []
	W1206 10:10:23.994739  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:23.994745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:23.994817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:24.028605  293728 cri.go:89] found id: ""
	I1206 10:10:24.028689  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.028715  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:24.028735  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:24.028848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:24.057290  293728 cri.go:89] found id: ""
	I1206 10:10:24.057317  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.057326  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:24.057333  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:24.057400  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:24.085994  293728 cri.go:89] found id: ""
	I1206 10:10:24.086029  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.086039  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:24.086045  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:24.086128  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:24.112798  293728 cri.go:89] found id: ""
	I1206 10:10:24.112826  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.112835  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:24.112841  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:24.112930  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:24.139149  293728 cri.go:89] found id: ""
	I1206 10:10:24.139175  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.139184  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:24.139190  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:24.139300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:24.165213  293728 cri.go:89] found id: ""
	I1206 10:10:24.165239  293728 logs.go:282] 0 containers: []
	W1206 10:10:24.165248  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:24.165257  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:24.165268  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:24.223441  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:24.223477  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:24.237256  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:24.237282  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:24.303131  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:24.295355    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.295806    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297324    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297646    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.299178    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:24.295355    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.295806    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297324    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.297646    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:24.299178    8461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:24.303154  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:24.303170  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:24.329120  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:24.329160  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:26.857977  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:26.868844  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:26.868920  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:26.893530  293728 cri.go:89] found id: ""
	I1206 10:10:26.893555  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.893563  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:26.893569  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:26.893628  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:26.922692  293728 cri.go:89] found id: ""
	I1206 10:10:26.922718  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.922727  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:26.922733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:26.922794  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:26.948535  293728 cri.go:89] found id: ""
	I1206 10:10:26.948560  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.948569  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:26.948575  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:26.948640  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:26.976097  293728 cri.go:89] found id: ""
	I1206 10:10:26.976167  293728 logs.go:282] 0 containers: []
	W1206 10:10:26.976193  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:26.976212  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:26.976300  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:27.010083  293728 cri.go:89] found id: ""
	I1206 10:10:27.010161  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.010184  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:27.010229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:27.010333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:27.038839  293728 cri.go:89] found id: ""
	I1206 10:10:27.038913  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.038934  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:27.038954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:27.039084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:27.066982  293728 cri.go:89] found id: ""
	I1206 10:10:27.067063  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.067086  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:27.067105  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:27.067216  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:27.092863  293728 cri.go:89] found id: ""
	I1206 10:10:27.092891  293728 logs.go:282] 0 containers: []
	W1206 10:10:27.092899  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:27.092909  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:27.092950  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:27.120341  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:27.120375  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:27.177452  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:27.177489  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:27.191505  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:27.191533  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:27.260108  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:27.251592    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.252285    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.253999    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.254325    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.255968    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:27.251592    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.252285    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.253999    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.254325    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:27.255968    8583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:27.260129  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:27.260141  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
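The repeated describe-nodes failures share one root cause: nothing is listening on localhost:8443, so kubectl's API discovery dies with "dial tcp [::1]:8443: connect: connection refused" before TLS or authentication are ever attempted. That is consistent with the empty kube-apiserver probe results above: the apiserver container never came up, so the port was never bound. A short Go sketch that separates "port closed" from "server up but unhealthy", assuming the same host and port as the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused TCP dial reproduces exactly the error class in the log:
	// the kernel rejects the connection because no socket is bound to 8443.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443; the failure would be higher up the stack (TLS, auth, or apiserver health)")
}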
	I1206 10:10:29.785293  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:29.795873  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:29.795947  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:29.826896  293728 cri.go:89] found id: ""
	I1206 10:10:29.826934  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.826944  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:29.826950  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:29.827093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:29.857768  293728 cri.go:89] found id: ""
	I1206 10:10:29.857793  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.857803  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:29.857809  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:29.857881  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:29.885651  293728 cri.go:89] found id: ""
	I1206 10:10:29.885686  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.885696  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:29.885721  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:29.885805  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:29.910764  293728 cri.go:89] found id: ""
	I1206 10:10:29.910892  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.910916  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:29.910928  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:29.911014  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:29.937166  293728 cri.go:89] found id: ""
	I1206 10:10:29.937191  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.937201  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:29.937208  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:29.937270  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:29.962684  293728 cri.go:89] found id: ""
	I1206 10:10:29.962717  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.962726  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:29.962733  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:29.962799  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:29.993702  293728 cri.go:89] found id: ""
	I1206 10:10:29.993776  293728 logs.go:282] 0 containers: []
	W1206 10:10:29.993799  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:29.993818  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:29.993904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:30.061338  293728 cri.go:89] found id: ""
	I1206 10:10:30.061423  293728 logs.go:282] 0 containers: []
	W1206 10:10:30.061447  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:30.061482  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:30.061514  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:30.110307  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:30.110344  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:30.178825  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:30.178864  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:30.194614  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:30.194641  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:30.269484  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:30.258437    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.259022    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.261951    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263145    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263843    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:30.258437    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.259022    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.261951    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263145    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:30.263843    8698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:30.269507  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:30.269521  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:32.796483  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:32.807219  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:32.807347  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:32.832338  293728 cri.go:89] found id: ""
	I1206 10:10:32.832365  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.832374  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:32.832381  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:32.832443  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:32.857737  293728 cri.go:89] found id: ""
	I1206 10:10:32.857763  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.857771  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:32.857780  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:32.857840  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:32.886514  293728 cri.go:89] found id: ""
	I1206 10:10:32.886537  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.886546  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:32.886553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:32.886622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:32.916133  293728 cri.go:89] found id: ""
	I1206 10:10:32.916157  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.916166  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:32.916172  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:32.916278  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:32.940460  293728 cri.go:89] found id: ""
	I1206 10:10:32.940485  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.940493  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:32.940500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:32.940580  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:32.967101  293728 cri.go:89] found id: ""
	I1206 10:10:32.967129  293728 logs.go:282] 0 containers: []
	W1206 10:10:32.967139  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:32.967146  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:32.967255  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:33.003657  293728 cri.go:89] found id: ""
	I1206 10:10:33.003687  293728 logs.go:282] 0 containers: []
	W1206 10:10:33.003696  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:33.003703  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:33.003817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:33.034541  293728 cri.go:89] found id: ""
	I1206 10:10:33.034570  293728 logs.go:282] 0 containers: []
	W1206 10:10:33.034579  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:33.034587  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:33.034599  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:33.103182  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:33.094513    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.095149    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.096956    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.097426    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.099078    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:33.094513    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.095149    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.096956    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.097426    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:33.099078    8794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:33.103205  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:33.103219  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:33.129473  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:33.129508  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:33.158555  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:33.158583  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:33.216375  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:33.216409  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:35.730137  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:35.743050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:35.743211  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:35.782795  293728 cri.go:89] found id: ""
	I1206 10:10:35.782873  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.782897  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:35.782917  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:35.783049  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:35.810026  293728 cri.go:89] found id: ""
	I1206 10:10:35.810102  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.810126  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:35.810144  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:35.810234  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:35.835162  293728 cri.go:89] found id: ""
	I1206 10:10:35.835240  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.835265  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:35.835286  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:35.835412  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:35.860195  293728 cri.go:89] found id: ""
	I1206 10:10:35.860227  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.860236  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:35.860247  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:35.860386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:35.886939  293728 cri.go:89] found id: ""
	I1206 10:10:35.886977  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.886995  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:35.887003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:35.887093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:35.917822  293728 cri.go:89] found id: ""
	I1206 10:10:35.917848  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.917858  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:35.917864  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:35.917944  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:35.945452  293728 cri.go:89] found id: ""
	I1206 10:10:35.945478  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.945488  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:35.945494  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:35.945556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:35.986146  293728 cri.go:89] found id: ""
	I1206 10:10:35.986174  293728 logs.go:282] 0 containers: []
	W1206 10:10:35.986183  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:35.986193  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:35.986204  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:36.053722  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:36.053759  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:36.068786  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:36.068815  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:36.132981  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:36.124259    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.124911    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.126650    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.127348    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.128990    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:36.124259    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.124911    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.126650    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.127348    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:36.128990    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:36.133005  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:36.133018  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:36.158971  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:36.159009  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:38.688989  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:38.699954  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:38.700025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:38.732646  293728 cri.go:89] found id: ""
	I1206 10:10:38.732680  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.732689  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:38.732696  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:38.732757  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:38.760849  293728 cri.go:89] found id: ""
	I1206 10:10:38.760878  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.760888  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:38.760894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:38.760952  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:38.793233  293728 cri.go:89] found id: ""
	I1206 10:10:38.793258  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.793267  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:38.793274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:38.793355  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:38.818786  293728 cri.go:89] found id: ""
	I1206 10:10:38.818814  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.818823  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:38.818831  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:38.818925  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:38.845346  293728 cri.go:89] found id: ""
	I1206 10:10:38.845373  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.845382  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:38.845388  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:38.845449  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:38.876064  293728 cri.go:89] found id: ""
	I1206 10:10:38.876088  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.876097  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:38.876103  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:38.876193  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:38.901010  293728 cri.go:89] found id: ""
	I1206 10:10:38.901037  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.901046  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:38.901053  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:38.901121  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:38.931159  293728 cri.go:89] found id: ""
	I1206 10:10:38.931185  293728 logs.go:282] 0 containers: []
	W1206 10:10:38.931194  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:38.931203  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:38.931214  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:38.945219  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:38.945247  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:39.040279  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:39.031608    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.032449    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034282    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034607    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.036094    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:39.031608    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.032449    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034282    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.034607    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:39.036094    9016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:39.040303  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:39.040315  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:39.069669  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:39.069709  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:39.102102  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:39.102133  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
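The cycle timestamps (10:10:21, :23.9, :26.9, :29.8, ... :44.6) show the retry cadence: roughly every three seconds minikube re-runs "pgrep -xnf kube-apiserver.*minikube.*" and, when that finds nothing, repeats the whole container probe and log gather. Here is a sketch of that fixed-interval wait loop; the three-second interval and two-minute deadline are read off these timestamps and assumed for illustration, not taken from minikube's source.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the pgrep check in the log: -x requires the
// pattern to match exactly, -n picks the newest process, -f matches
// against the full command line. pgrep exits 0 only on a match.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the spacing between cycles above
	}
	fmt.Println("timed out waiting for kube-apiserver; each retry only regathers the same empty diagnostics")
}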
	I1206 10:10:41.662114  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:41.674379  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:41.674461  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:41.700812  293728 cri.go:89] found id: ""
	I1206 10:10:41.700836  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.700846  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:41.700852  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:41.700945  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:41.732717  293728 cri.go:89] found id: ""
	I1206 10:10:41.732744  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.732753  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:41.732759  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:41.732818  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:41.765582  293728 cri.go:89] found id: ""
	I1206 10:10:41.765609  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.765618  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:41.765624  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:41.765684  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:41.795133  293728 cri.go:89] found id: ""
	I1206 10:10:41.795160  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.795169  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:41.795178  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:41.795240  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:41.824848  293728 cri.go:89] found id: ""
	I1206 10:10:41.824876  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.824885  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:41.824894  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:41.825002  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:41.850710  293728 cri.go:89] found id: ""
	I1206 10:10:41.850738  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.850748  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:41.850754  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:41.850817  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:41.876689  293728 cri.go:89] found id: ""
	I1206 10:10:41.876714  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.876723  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:41.876730  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:41.876837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:41.910933  293728 cri.go:89] found id: ""
	I1206 10:10:41.910958  293728 logs.go:282] 0 containers: []
	W1206 10:10:41.910967  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:41.910977  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:41.910988  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:41.940383  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:41.940411  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:42.002369  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:42.002465  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:42.036193  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:42.036220  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:42.116431  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:42.104500    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.106090    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.107160    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.108051    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.110987    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:10:42.104500    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.106090    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.107160    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.108051    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:42.110987    9142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:10:42.116466  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:42.116485  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:44.645750  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:44.657010  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:44.657087  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:44.681487  293728 cri.go:89] found id: ""
	I1206 10:10:44.681511  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.681520  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:44.681526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:44.681632  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:44.707007  293728 cri.go:89] found id: ""
	I1206 10:10:44.707032  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.707059  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:44.707065  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:44.707124  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:44.740358  293728 cri.go:89] found id: ""
	I1206 10:10:44.740384  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.740394  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:44.740400  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:44.740462  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:44.774979  293728 cri.go:89] found id: ""
	I1206 10:10:44.775005  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.775013  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:44.775020  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:44.775099  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:44.802733  293728 cri.go:89] found id: ""
	I1206 10:10:44.802759  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.802768  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:44.802774  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:44.802836  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:44.830059  293728 cri.go:89] found id: ""
	I1206 10:10:44.830082  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.830091  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:44.830104  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:44.830164  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:44.857962  293728 cri.go:89] found id: ""
	I1206 10:10:44.857988  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.857997  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:44.858003  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:44.858062  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:44.882971  293728 cri.go:89] found id: ""
	I1206 10:10:44.882993  293728 logs.go:282] 0 containers: []
	W1206 10:10:44.883002  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:44.883011  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:44.883021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:44.939214  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:44.939249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:44.953046  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:44.953074  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:45.078537  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:45.068034    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069216    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.069914    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.072098    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:45.073533    9244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:45.078570  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:45.078586  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:45.108352  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:45.108392  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
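	Each probe cycle ends with the same diagnostic bundle, visible in the "Gathering logs for ..." lines: kubelet and containerd units from journald, kernel warnings from dmesg, a `kubectl describe nodes` against the node-local kubeconfig, and a container listing. A sketch that collects the same bundle by hand; the commands are taken verbatim from the log, only the redirection into files is added here:

	    sudo journalctl -u kubelet -n 400 > kubelet.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    sudo journalctl -u containerd -n 400 > containerd.log
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig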
	I1206 10:10:47.660188  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:47.670914  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:47.670992  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:47.695337  293728 cri.go:89] found id: ""
	I1206 10:10:47.695363  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.695417  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:47.695425  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:47.695496  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:47.728763  293728 cri.go:89] found id: ""
	I1206 10:10:47.728834  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.728855  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:47.728877  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:47.728982  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:47.755564  293728 cri.go:89] found id: ""
	I1206 10:10:47.755640  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.755663  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:47.755683  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:47.755794  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:47.786763  293728 cri.go:89] found id: ""
	I1206 10:10:47.786838  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.786869  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:47.786892  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:47.786999  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:47.813109  293728 cri.go:89] found id: ""
	I1206 10:10:47.813187  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.813209  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:47.813227  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:47.813312  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:47.839872  293728 cri.go:89] found id: ""
	I1206 10:10:47.839947  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.839963  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:47.839971  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:47.840029  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:47.864803  293728 cri.go:89] found id: ""
	I1206 10:10:47.864827  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.864835  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:47.864842  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:47.864908  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:47.893715  293728 cri.go:89] found id: ""
	I1206 10:10:47.893740  293728 logs.go:282] 0 containers: []
	W1206 10:10:47.893749  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:47.893759  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:47.893770  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:47.962240  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:47.954010    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.954579    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956159    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.956626    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:47.958129    9349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:47.962263  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:47.962275  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:47.988774  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:47.988808  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:48.022271  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:48.022301  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:48.088564  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:48.088601  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:50.605005  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:50.615765  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:50.615847  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:50.641365  293728 cri.go:89] found id: ""
	I1206 10:10:50.641389  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.641397  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:50.641404  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:50.641468  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:50.665749  293728 cri.go:89] found id: ""
	I1206 10:10:50.665775  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.665784  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:50.665790  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:50.665848  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:50.693092  293728 cri.go:89] found id: ""
	I1206 10:10:50.693117  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.693133  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:50.693139  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:50.693198  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:50.721292  293728 cri.go:89] found id: ""
	I1206 10:10:50.721319  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.721328  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:50.721335  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:50.721394  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:50.757580  293728 cri.go:89] found id: ""
	I1206 10:10:50.757608  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.757617  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:50.757623  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:50.757681  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:50.795246  293728 cri.go:89] found id: ""
	I1206 10:10:50.795275  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.795284  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:50.795290  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:50.795352  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:50.831466  293728 cri.go:89] found id: ""
	I1206 10:10:50.831489  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.831497  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:50.831503  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:50.831563  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:50.856692  293728 cri.go:89] found id: ""
	I1206 10:10:50.856719  293728 logs.go:282] 0 containers: []
	W1206 10:10:50.856728  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:50.856737  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:50.856748  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:50.914369  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:50.914404  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:50.928218  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:50.928249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:51.001552  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:50.990416    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.991460    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.992543    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.993284    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:50.996113    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:51.001649  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:51.001679  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:51.035670  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:51.035706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:53.568268  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:53.579523  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:53.579600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:53.605604  293728 cri.go:89] found id: ""
	I1206 10:10:53.605626  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.605636  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:53.605642  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:53.605704  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:53.632535  293728 cri.go:89] found id: ""
	I1206 10:10:53.632558  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.632566  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:53.632573  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:53.632633  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:53.664459  293728 cri.go:89] found id: ""
	I1206 10:10:53.664485  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.664494  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:53.664500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:53.664561  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:53.689200  293728 cri.go:89] found id: ""
	I1206 10:10:53.689227  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.689235  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:53.689242  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:53.689303  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:53.724364  293728 cri.go:89] found id: ""
	I1206 10:10:53.724391  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.724401  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:53.724408  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:53.724489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:53.760957  293728 cri.go:89] found id: ""
	I1206 10:10:53.760985  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.760995  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:53.761002  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:53.761065  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:53.795256  293728 cri.go:89] found id: ""
	I1206 10:10:53.795417  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.795469  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:53.795490  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:53.795618  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:53.820946  293728 cri.go:89] found id: ""
	I1206 10:10:53.821014  293728 logs.go:282] 0 containers: []
	W1206 10:10:53.821028  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:53.821038  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:53.821049  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:53.850603  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:53.850632  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:53.910568  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:53.910606  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:53.924408  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:53.924435  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:53.993865  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:53.984800    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.985669    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987623    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.987938    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:53.989469    9588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:53.993926  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:53.993964  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:56.525953  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:56.537170  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:56.537251  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:56.562800  293728 cri.go:89] found id: ""
	I1206 10:10:56.562825  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.562834  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:56.562841  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:56.562903  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:56.589000  293728 cri.go:89] found id: ""
	I1206 10:10:56.589032  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.589042  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:56.589048  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:56.589108  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:56.613252  293728 cri.go:89] found id: ""
	I1206 10:10:56.613276  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.613284  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:56.613291  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:56.613354  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:56.643136  293728 cri.go:89] found id: ""
	I1206 10:10:56.643176  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.643186  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:56.643193  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:56.643265  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:56.669515  293728 cri.go:89] found id: ""
	I1206 10:10:56.669539  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.669547  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:56.669554  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:56.669613  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:56.694989  293728 cri.go:89] found id: ""
	I1206 10:10:56.695013  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.695022  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:56.695028  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:56.695295  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:56.733872  293728 cri.go:89] found id: ""
	I1206 10:10:56.733898  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.733907  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:56.733914  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:56.733981  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:56.768700  293728 cri.go:89] found id: ""
	I1206 10:10:56.768725  293728 logs.go:282] 0 containers: []
	W1206 10:10:56.768734  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:56.768745  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:56.768765  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:10:56.801786  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:56.801812  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:56.857425  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:56.857458  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:56.870898  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:56.870929  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:56.939737  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:56.930826    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.931761    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933321    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.933912    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:56.935699    9697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:56.939814  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:56.939833  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:59.467303  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:10:59.479788  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:10:59.479913  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:10:59.507178  293728 cri.go:89] found id: ""
	I1206 10:10:59.507214  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.507223  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:10:59.507229  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:10:59.507307  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:10:59.532362  293728 cri.go:89] found id: ""
	I1206 10:10:59.532435  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.532460  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:10:59.532478  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:10:59.532565  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:10:59.561793  293728 cri.go:89] found id: ""
	I1206 10:10:59.561869  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.561893  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:10:59.561912  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:10:59.562006  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:10:59.587885  293728 cri.go:89] found id: ""
	I1206 10:10:59.587914  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.587933  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:10:59.587955  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:10:59.588043  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:10:59.616632  293728 cri.go:89] found id: ""
	I1206 10:10:59.616701  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.616723  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:10:59.616741  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:10:59.616828  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:10:59.641907  293728 cri.go:89] found id: ""
	I1206 10:10:59.641942  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.641950  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:10:59.641957  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:10:59.642030  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:10:59.666146  293728 cri.go:89] found id: ""
	I1206 10:10:59.666181  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.666190  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:10:59.666197  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:10:59.666267  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:10:59.690454  293728 cri.go:89] found id: ""
	I1206 10:10:59.690525  293728 logs.go:282] 0 containers: []
	W1206 10:10:59.690549  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:10:59.690571  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:10:59.690606  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:10:59.747565  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:10:59.747602  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:10:59.761979  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:10:59.762033  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:10:59.832718  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:10:59.824094    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825243    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.825921    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827020    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:10:59.827705    9799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:10:59.832743  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:10:59.832755  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:10:59.858330  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:10:59.858360  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
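	Every `describe nodes` attempt fails the same way: the node-local kubectl dials [::1]:8443 and gets connection refused, i.e. nothing is listening on the apiserver port, which is consistent with `crictl` reporting no kube-apiserver container at all. A hypothetical follow-up check (these two commands do not appear in the log) that would confirm the port is unbound from inside the node:

	    # Nothing bound to 8443 explains the repeated "connect: connection refused".
	    sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'
	    # If an apiserver were up, the health endpoint would answer here:
	    curl -ks https://localhost:8443/healthz || true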
	I1206 10:11:02.390395  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:02.401485  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:02.401558  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:02.427611  293728 cri.go:89] found id: ""
	I1206 10:11:02.427638  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.427647  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:02.427654  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:02.427729  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:02.454049  293728 cri.go:89] found id: ""
	I1206 10:11:02.454078  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.454087  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:02.454093  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:02.454154  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:02.480392  293728 cri.go:89] found id: ""
	I1206 10:11:02.480417  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.480425  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:02.480431  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:02.480489  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:02.506546  293728 cri.go:89] found id: ""
	I1206 10:11:02.506572  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.506581  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:02.506587  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:02.506647  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:02.531917  293728 cri.go:89] found id: ""
	I1206 10:11:02.531954  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.531963  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:02.531979  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:02.532097  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:02.559738  293728 cri.go:89] found id: ""
	I1206 10:11:02.559759  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.559768  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:02.559774  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:02.559834  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:02.584556  293728 cri.go:89] found id: ""
	I1206 10:11:02.584578  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.584587  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:02.584593  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:02.584652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:02.617108  293728 cri.go:89] found id: ""
	I1206 10:11:02.617164  293728 logs.go:282] 0 containers: []
	W1206 10:11:02.617174  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:02.617183  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:02.617199  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:02.645764  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:02.645802  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:02.675285  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:02.675317  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:02.733222  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:02.733262  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:02.747026  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:02.747069  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:02.827017  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:02.817993    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.818819    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.820650    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.821248    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:02.822937    9926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:05.327889  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:05.338718  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:05.338812  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:05.363857  293728 cri.go:89] found id: ""
	I1206 10:11:05.363882  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.363892  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:05.363899  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:05.363969  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:05.389419  293728 cri.go:89] found id: ""
	I1206 10:11:05.389444  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.389453  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:05.389462  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:05.389522  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:05.416875  293728 cri.go:89] found id: ""
	I1206 10:11:05.416937  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.416952  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:05.416960  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:05.417018  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:05.445294  293728 cri.go:89] found id: ""
	I1206 10:11:05.445316  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.445325  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:05.445331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:05.445389  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:05.469930  293728 cri.go:89] found id: ""
	I1206 10:11:05.469952  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.469960  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:05.469966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:05.470023  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:05.494527  293728 cri.go:89] found id: ""
	I1206 10:11:05.494591  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.494623  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:05.494641  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:05.494712  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:05.519703  293728 cri.go:89] found id: ""
	I1206 10:11:05.519727  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.519736  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:05.519742  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:05.519802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:05.544697  293728 cri.go:89] found id: ""
	I1206 10:11:05.544721  293728 logs.go:282] 0 containers: []
	W1206 10:11:05.544729  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:05.544738  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:05.544751  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:05.558261  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:05.558288  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:05.627696  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:05.618572   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.619577   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.621405   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.622011   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.623059   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:05.618572   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.619577   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.621405   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.622011   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:05.623059   10021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:05.627760  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:05.627781  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:05.653464  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:05.653499  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:05.684619  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:05.684647  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
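
The block above is one full iteration of the apiserver wait loop visible throughout this log: roughly every three seconds a check for a kube-apiserver process runs, then each expected control-plane container is enumerated, and every query comes back empty. A minimal, self-contained Go sketch of that polling pattern (illustrative only; minikube actually runs these commands over SSH via ssh_runner.go, and the 2-minute timeout below is an assumption, not the real deadline):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a kube-apiserver process until the deadline,
    // using the same pgrep check the log repeats above.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only if a matching process exists.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(3 * time.Second) // the log shows ~3s between polls
        }
        return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
        if err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
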
	I1206 10:11:08.247509  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:08.260609  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:08.260730  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:08.289483  293728 cri.go:89] found id: ""
	I1206 10:11:08.289551  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.289567  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:08.289580  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:08.289640  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:08.318013  293728 cri.go:89] found id: ""
	I1206 10:11:08.318037  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.318045  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:08.318051  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:08.318110  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:08.351762  293728 cri.go:89] found id: ""
	I1206 10:11:08.351785  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.351794  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:08.351800  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:08.351858  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:08.377083  293728 cri.go:89] found id: ""
	I1206 10:11:08.377159  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.377174  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:08.377181  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:08.377240  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:08.406041  293728 cri.go:89] found id: ""
	I1206 10:11:08.406063  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.406072  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:08.406077  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:08.406135  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:08.430970  293728 cri.go:89] found id: ""
	I1206 10:11:08.430996  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.431004  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:08.431011  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:08.431096  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:08.454833  293728 cri.go:89] found id: ""
	I1206 10:11:08.454857  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.454865  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:08.454872  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:08.454931  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:08.484046  293728 cri.go:89] found id: ""
	I1206 10:11:08.484113  293728 logs.go:282] 0 containers: []
	W1206 10:11:08.484129  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:08.484139  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:08.484150  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:08.551224  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:08.542554   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.543265   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545049   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545727   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.547350   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:08.542554   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.543265   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545049   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.545727   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:08.547350   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:08.551247  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:08.551259  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:08.577706  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:08.577740  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:08.605435  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:08.605462  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:08.665984  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:08.666020  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
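
Each cycle lists every expected component by name with crictl; an empty ID list for all of them means containerd has no record of the control-plane containers, running or exited, so kubelet never created the static pods. A sketch of reproducing the same enumeration by hand on the node (the component list and crictl flags are copied from the log; the containerIDs helper is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the query from the log: all containers (running or
    // exited) whose name matches the pattern, IDs only.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%-24s crictl error: %v\n", c, err)
                continue
            }
            fmt.Printf("%-24s %d container(s) %v\n", c, len(ids), ids)
        }
    }
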
	I1206 10:11:11.180758  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:11.193428  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:11.193501  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:11.230343  293728 cri.go:89] found id: ""
	I1206 10:11:11.230374  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.230383  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:11.230389  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:11.230452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:11.267153  293728 cri.go:89] found id: ""
	I1206 10:11:11.267177  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.267187  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:11.267193  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:11.267258  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:11.299679  293728 cri.go:89] found id: ""
	I1206 10:11:11.299708  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.299718  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:11.299724  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:11.299784  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:11.325476  293728 cri.go:89] found id: ""
	I1206 10:11:11.325503  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.325512  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:11.325518  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:11.325600  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:11.351586  293728 cri.go:89] found id: ""
	I1206 10:11:11.351614  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.351624  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:11.351632  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:11.351700  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:11.377176  293728 cri.go:89] found id: ""
	I1206 10:11:11.377203  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.377212  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:11.377219  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:11.377308  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:11.402618  293728 cri.go:89] found id: ""
	I1206 10:11:11.402644  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.402652  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:11.402659  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:11.402745  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:11.429503  293728 cri.go:89] found id: ""
	I1206 10:11:11.429529  293728 logs.go:282] 0 containers: []
	W1206 10:11:11.429538  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:11.429547  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:11.429562  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:11.486599  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:11.486638  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:11.500957  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:11.500987  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:11.577987  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:11.568882   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.569760   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.571647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.572318   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.573801   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:11.568882   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.569760   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.571647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.572318   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:11.573801   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:11.578008  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:11.578021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:11.604993  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:11.605027  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:14.137875  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:14.148737  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:14.148811  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:14.173594  293728 cri.go:89] found id: ""
	I1206 10:11:14.173671  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.173695  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:14.173714  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:14.173809  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:14.200007  293728 cri.go:89] found id: ""
	I1206 10:11:14.200033  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.200043  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:14.200050  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:14.200117  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:14.233924  293728 cri.go:89] found id: ""
	I1206 10:11:14.233951  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.233959  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:14.233966  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:14.234030  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:14.264436  293728 cri.go:89] found id: ""
	I1206 10:11:14.264464  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.264474  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:14.264480  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:14.264540  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:14.292320  293728 cri.go:89] found id: ""
	I1206 10:11:14.292348  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.292359  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:14.292365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:14.292426  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:14.317612  293728 cri.go:89] found id: ""
	I1206 10:11:14.317640  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.317649  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:14.317656  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:14.317714  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:14.342496  293728 cri.go:89] found id: ""
	I1206 10:11:14.342521  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.342530  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:14.342536  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:14.342596  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:14.368247  293728 cri.go:89] found id: ""
	I1206 10:11:14.368273  293728 logs.go:282] 0 containers: []
	W1206 10:11:14.368282  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:14.368292  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:14.368304  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:14.394942  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:14.394976  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:14.428315  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:14.428345  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:14.484824  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:14.484855  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:14.498675  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:14.498705  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:14.568051  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:14.559253   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.560001   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.561736   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.562345   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.564094   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:14.559253   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.560001   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.561736   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.562345   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:14.564094   10375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
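
The kubectl errors above all reduce to one fact: with no apiserver container, nothing is listening on port 8443 inside the node, so the TCP dial to localhost:8443 is refused before any TLS or API-level step is reached. A quick stand-alone check (the port comes from the log; the 2-second dial budget is an assumption):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Mirrors the "connect: connection refused" in the kubectl output.
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8443")
    }
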
	I1206 10:11:17.068293  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:17.078902  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:17.078976  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:17.103674  293728 cri.go:89] found id: ""
	I1206 10:11:17.103699  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.103708  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:17.103715  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:17.103777  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:17.139412  293728 cri.go:89] found id: ""
	I1206 10:11:17.139481  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.139503  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:17.139523  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:17.139610  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:17.168435  293728 cri.go:89] found id: ""
	I1206 10:11:17.168461  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.168470  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:17.168476  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:17.168568  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:17.198788  293728 cri.go:89] found id: ""
	I1206 10:11:17.198854  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.198879  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:17.198898  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:17.198983  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:17.233132  293728 cri.go:89] found id: ""
	I1206 10:11:17.233218  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.233242  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:17.233262  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:17.233356  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:17.268547  293728 cri.go:89] found id: ""
	I1206 10:11:17.268613  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.268637  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:17.268655  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:17.268741  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:17.303935  293728 cri.go:89] found id: ""
	I1206 10:11:17.303957  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.303966  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:17.303972  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:17.304032  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:17.328050  293728 cri.go:89] found id: ""
	I1206 10:11:17.328074  293728 logs.go:282] 0 containers: []
	W1206 10:11:17.328084  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:17.328092  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:17.328139  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:17.387715  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:17.387750  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:17.401545  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:17.401576  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:17.467905  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:17.459187   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.459639   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461308   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461736   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.463309   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:17.459187   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.459639   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461308   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.461736   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:17.463309   10475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:17.467927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:17.467939  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:17.493972  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:17.494007  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:20.027522  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:20.040220  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:20.040323  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:20.068566  293728 cri.go:89] found id: ""
	I1206 10:11:20.068592  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.068602  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:20.068610  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:20.068691  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:20.096577  293728 cri.go:89] found id: ""
	I1206 10:11:20.096616  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.096626  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:20.096633  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:20.096791  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:20.125150  293728 cri.go:89] found id: ""
	I1206 10:11:20.125175  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.125185  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:20.125192  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:20.125253  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:20.151199  293728 cri.go:89] found id: ""
	I1206 10:11:20.151225  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.151234  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:20.151241  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:20.151303  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:20.177323  293728 cri.go:89] found id: ""
	I1206 10:11:20.177349  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.177359  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:20.177365  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:20.177454  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:20.207914  293728 cri.go:89] found id: ""
	I1206 10:11:20.207940  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.207950  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:20.207956  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:20.208015  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:20.250213  293728 cri.go:89] found id: ""
	I1206 10:11:20.250247  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.250256  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:20.250265  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:20.250336  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:20.284320  293728 cri.go:89] found id: ""
	I1206 10:11:20.284356  293728 logs.go:282] 0 containers: []
	W1206 10:11:20.284365  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:20.284374  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:20.284384  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:20.317496  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:20.317524  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:20.373988  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:20.374021  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:20.387702  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:20.387728  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:20.454347  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:20.446421   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.447014   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448572   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448979   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.450465   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:20.446421   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.447014   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448572   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.448979   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:20.450465   10598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:20.454370  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:20.454383  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
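
For reference, the "Gathering logs for ..." steps map each source name to a fixed node-side command. The commands below are copied verbatim from the Run: lines above (the describe-nodes source, which invokes the bundled kubectl, is omitted); the map and gather helper are only an illustration of that dispatch, not minikube's implementation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    var logSources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "containerd":       "sudo journalctl -u containerd -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    // gather runs one named source's command through bash, as the log shows.
    func gather(name string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", logSources[name]).CombinedOutput()
        return string(out), err
    }

    func main() {
        for name := range logSources {
            out, err := gather(name)
            fmt.Printf("== %s (err=%v) ==\n%s", name, err, out)
        }
    }
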
	I1206 10:11:22.980202  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:22.991835  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:22.991961  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:23.022299  293728 cri.go:89] found id: ""
	I1206 10:11:23.022379  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.022404  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:23.022423  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:23.022532  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:23.055611  293728 cri.go:89] found id: ""
	I1206 10:11:23.055634  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.055643  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:23.055649  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:23.055708  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:23.080752  293728 cri.go:89] found id: ""
	I1206 10:11:23.080828  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.080850  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:23.080870  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:23.080965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:23.106107  293728 cri.go:89] found id: ""
	I1206 10:11:23.106134  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.106143  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:23.106150  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:23.106212  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:23.132303  293728 cri.go:89] found id: ""
	I1206 10:11:23.132327  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.132335  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:23.132342  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:23.132408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:23.156632  293728 cri.go:89] found id: ""
	I1206 10:11:23.156697  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.156712  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:23.156719  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:23.156775  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:23.180697  293728 cri.go:89] found id: ""
	I1206 10:11:23.180764  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.180777  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:23.180784  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:23.180842  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:23.208267  293728 cri.go:89] found id: ""
	I1206 10:11:23.208341  293728 logs.go:282] 0 containers: []
	W1206 10:11:23.208364  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:23.208387  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:23.208425  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:23.292598  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:23.283687   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.284573   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.286441   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.287115   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.288724   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:23.283687   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.284573   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.286441   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.287115   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:23.288724   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:23.292618  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:23.292631  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:23.318604  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:23.318641  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:23.352649  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:23.352676  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:23.411769  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:23.411803  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
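
One formatting note on the failed "describe nodes" blocks: the runner captures stdout and stderr separately, then embeds stderr twice, once inside the error string and once between the ** stderr ** markers, which is why each failure block repeats the same five kubectl lines. A sketch of the same invocation with the streams split (the command is copied from the log; the reporting format here is simplified):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        err := cmd.Run()
        // stdout stays empty while the apiserver is down; the connection-refused
        // lines arrive on stderr, exactly as in the blocks above.
        fmt.Printf("err: %v\nstdout:\n%s\nstderr:\n%s\n", err, stdout.String(), stderr.String())
    }
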
	I1206 10:11:25.925870  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:25.936619  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:25.936701  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:25.963699  293728 cri.go:89] found id: ""
	I1206 10:11:25.963722  293728 logs.go:282] 0 containers: []
	W1206 10:11:25.963731  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:25.963738  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:25.963802  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:25.995991  293728 cri.go:89] found id: ""
	I1206 10:11:25.996066  293728 logs.go:282] 0 containers: []
	W1206 10:11:25.996088  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:25.996106  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:25.996196  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:26.030700  293728 cri.go:89] found id: ""
	I1206 10:11:26.030728  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.030738  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:26.030745  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:26.030809  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:26.066012  293728 cri.go:89] found id: ""
	I1206 10:11:26.066044  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.066054  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:26.066060  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:26.066125  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:26.092723  293728 cri.go:89] found id: ""
	I1206 10:11:26.092753  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.092763  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:26.092769  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:26.092837  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:26.120031  293728 cri.go:89] found id: ""
	I1206 10:11:26.120108  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.120125  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:26.120132  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:26.120198  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:26.147104  293728 cri.go:89] found id: ""
	I1206 10:11:26.147131  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.147152  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:26.147158  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:26.147257  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:26.173188  293728 cri.go:89] found id: ""
	I1206 10:11:26.173212  293728 logs.go:282] 0 containers: []
	W1206 10:11:26.173221  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:26.173230  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:26.173273  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:26.259536  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:26.250765   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.251710   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253385   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253690   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.255208   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:26.250765   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.251710   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253385   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.253690   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:26.255208   10803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:26.259581  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:26.259596  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:26.288770  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:26.288853  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:26.318991  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:26.319082  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:26.377710  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:26.377743  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:28.892920  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:28.903557  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:28.903622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:28.928667  293728 cri.go:89] found id: ""
	I1206 10:11:28.928691  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.928699  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:28.928707  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:28.928767  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:28.953528  293728 cri.go:89] found id: ""
	I1206 10:11:28.953554  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.953562  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:28.953568  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:28.953626  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:28.981995  293728 cri.go:89] found id: ""
	I1206 10:11:28.982022  293728 logs.go:282] 0 containers: []
	W1206 10:11:28.982031  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:28.982037  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:28.982101  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:29.021133  293728 cri.go:89] found id: ""
	I1206 10:11:29.021161  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.021170  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:29.021177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:29.021244  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:29.051961  293728 cri.go:89] found id: ""
	I1206 10:11:29.052044  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.052056  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:29.052063  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:29.052157  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:29.076239  293728 cri.go:89] found id: ""
	I1206 10:11:29.076260  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.076268  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:29.076274  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:29.076331  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:29.100533  293728 cri.go:89] found id: ""
	I1206 10:11:29.100568  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.100577  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:29.100583  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:29.100642  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:29.125877  293728 cri.go:89] found id: ""
	I1206 10:11:29.125900  293728 logs.go:282] 0 containers: []
	W1206 10:11:29.125909  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
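Every poll in this stretch repeats the same probe, roughly every three seconds: check for a running kube-apiserver process, then ask the CRI runtime for containers matching each control-plane component by name. Every probe in this section returns empty, which is consistent with the control-plane containers never having been created. A sketch of the probe as a loop, using the component names from the log (run inside the node):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
        kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done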
	I1206 10:11:29.125917  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:29.125929  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:29.184407  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:29.184441  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:29.198478  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:29.198553  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:29.291075  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:29.280844   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.281788   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285131   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.285582   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:29.287240   10919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
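The "describe nodes" failure is a symptom of the same condition: kubectl inside the node targets https://localhost:8443, and "connection refused" on [::1]:8443 just means nothing is listening on the apiserver port, consistent with the empty kube-apiserver probes above. A quick manual confirmation, sketched with commands that are not part of this run (binary and kubeconfig paths taken from the log):

    # is anything bound to the apiserver port?
    sudo ss -tlnp | grep -w 8443 || echo "nothing listening on 8443"
    # the same health probe kubectl would make
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz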
	I1206 10:11:29.291096  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:29.291109  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:29.317026  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:29.317059  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:31.845985  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:31.857066  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:31.857145  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:31.882982  293728 cri.go:89] found id: ""
	I1206 10:11:31.883059  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.883081  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:31.883101  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:31.883187  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:31.908108  293728 cri.go:89] found id: ""
	I1206 10:11:31.908138  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.908148  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:31.908154  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:31.908244  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:31.933164  293728 cri.go:89] found id: ""
	I1206 10:11:31.933188  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.933197  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:31.933204  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:31.933261  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:31.961760  293728 cri.go:89] found id: ""
	I1206 10:11:31.961784  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.961792  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:31.961798  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:31.961864  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:31.993806  293728 cri.go:89] found id: ""
	I1206 10:11:31.993836  293728 logs.go:282] 0 containers: []
	W1206 10:11:31.993845  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:31.993851  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:31.993915  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:32.025453  293728 cri.go:89] found id: ""
	I1206 10:11:32.025480  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.025489  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:32.025496  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:32.025556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:32.053138  293728 cri.go:89] found id: ""
	I1206 10:11:32.053160  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.053171  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:32.053177  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:32.053236  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:32.084984  293728 cri.go:89] found id: ""
	I1206 10:11:32.085009  293728 logs.go:282] 0 containers: []
	W1206 10:11:32.085018  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:32.085027  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:32.085058  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:32.113246  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:32.113276  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:32.170516  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:32.170553  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:32.184767  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:32.184797  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:32.266194  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:32.257320   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.258649   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.259490   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.260223   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:32.261917   11049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:32.266261  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:32.266289  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:34.798474  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:34.809168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:34.809239  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:34.837292  293728 cri.go:89] found id: ""
	I1206 10:11:34.837314  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.837322  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:34.837329  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:34.837387  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:34.863331  293728 cri.go:89] found id: ""
	I1206 10:11:34.863353  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.863362  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:34.863369  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:34.863465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:34.893355  293728 cri.go:89] found id: ""
	I1206 10:11:34.893379  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.893388  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:34.893395  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:34.893452  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:34.919127  293728 cri.go:89] found id: ""
	I1206 10:11:34.919153  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.919162  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:34.919169  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:34.919228  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:34.948423  293728 cri.go:89] found id: ""
	I1206 10:11:34.948448  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.948458  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:34.948467  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:34.948526  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:34.984476  293728 cri.go:89] found id: ""
	I1206 10:11:34.984503  293728 logs.go:282] 0 containers: []
	W1206 10:11:34.984513  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:34.984520  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:34.984579  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:35.017804  293728 cri.go:89] found id: ""
	I1206 10:11:35.017831  293728 logs.go:282] 0 containers: []
	W1206 10:11:35.017840  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:35.017847  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:35.017955  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:35.049243  293728 cri.go:89] found id: ""
	I1206 10:11:35.049270  293728 logs.go:282] 0 containers: []
	W1206 10:11:35.049279  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:35.049288  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:35.049300  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:35.109333  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:35.109371  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:35.123612  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:35.123643  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:35.191474  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:35.181616   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.182533   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184226   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.184809   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:35.186401   11150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:35.191495  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:35.191509  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:35.217926  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:35.218007  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:37.758372  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:37.769553  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:37.769625  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:37.799573  293728 cri.go:89] found id: ""
	I1206 10:11:37.799606  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.799617  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:37.799626  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:37.799697  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:37.828542  293728 cri.go:89] found id: ""
	I1206 10:11:37.828580  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.828589  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:37.828595  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:37.828670  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:37.854197  293728 cri.go:89] found id: ""
	I1206 10:11:37.854223  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.854233  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:37.854239  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:37.854299  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:37.879147  293728 cri.go:89] found id: ""
	I1206 10:11:37.879220  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.879243  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:37.879261  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:37.879346  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:37.905390  293728 cri.go:89] found id: ""
	I1206 10:11:37.905412  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.905421  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:37.905428  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:37.905533  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:37.933187  293728 cri.go:89] found id: ""
	I1206 10:11:37.933251  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.933266  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:37.933273  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:37.933333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:37.957719  293728 cri.go:89] found id: ""
	I1206 10:11:37.957743  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.957756  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:37.957763  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:37.957823  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:37.991726  293728 cri.go:89] found id: ""
	I1206 10:11:37.991755  293728 logs.go:282] 0 containers: []
	W1206 10:11:37.991765  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:37.991775  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:37.991787  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:38.072266  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:38.063102   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.063715   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.065465   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.066011   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:38.067888   11253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:38.072293  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:38.072308  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:38.100264  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:38.100302  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:38.128959  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:38.128989  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:38.186487  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:38.186517  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:40.700896  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:40.711768  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:40.711841  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:40.737641  293728 cri.go:89] found id: ""
	I1206 10:11:40.737664  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.737675  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:40.737681  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:40.737740  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:40.763410  293728 cri.go:89] found id: ""
	I1206 10:11:40.763437  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.763447  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:40.763453  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:40.763521  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:40.788254  293728 cri.go:89] found id: ""
	I1206 10:11:40.788277  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.788287  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:40.788293  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:40.788351  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:40.812429  293728 cri.go:89] found id: ""
	I1206 10:11:40.812454  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.812464  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:40.812470  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:40.812577  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:40.836598  293728 cri.go:89] found id: ""
	I1206 10:11:40.836623  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.836632  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:40.836639  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:40.836699  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:40.865558  293728 cri.go:89] found id: ""
	I1206 10:11:40.865584  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.865593  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:40.865600  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:40.865658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:40.890394  293728 cri.go:89] found id: ""
	I1206 10:11:40.890419  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.890428  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:40.890434  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:40.890494  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:40.919443  293728 cri.go:89] found id: ""
	I1206 10:11:40.919471  293728 logs.go:282] 0 containers: []
	W1206 10:11:40.919480  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:40.919489  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:40.919501  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:40.932761  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:40.932788  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:41.018904  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:41.007696   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.008625   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.010702   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.011857   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:41.013002   11362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:41.018927  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:41.018942  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:41.049613  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:41.049648  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:41.077525  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:41.077552  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:43.637314  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:43.648009  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:43.648084  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:43.673268  293728 cri.go:89] found id: ""
	I1206 10:11:43.673291  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.673299  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:43.673306  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:43.673363  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:43.698533  293728 cri.go:89] found id: ""
	I1206 10:11:43.698563  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.698573  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:43.698579  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:43.698666  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:43.726409  293728 cri.go:89] found id: ""
	I1206 10:11:43.726434  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.726443  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:43.726449  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:43.726524  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:43.753336  293728 cri.go:89] found id: ""
	I1206 10:11:43.753361  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.753371  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:43.753377  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:43.753468  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:43.778503  293728 cri.go:89] found id: ""
	I1206 10:11:43.778526  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.778535  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:43.778541  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:43.778622  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:43.806530  293728 cri.go:89] found id: ""
	I1206 10:11:43.806554  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.806564  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:43.806570  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:43.806652  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:43.831543  293728 cri.go:89] found id: ""
	I1206 10:11:43.831570  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.831579  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:43.831585  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:43.831644  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:43.856767  293728 cri.go:89] found id: ""
	I1206 10:11:43.856791  293728 logs.go:282] 0 containers: []
	W1206 10:11:43.856800  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:43.856808  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:43.856821  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:43.926714  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:43.918532   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.919086   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.920754   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.921218   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:43.922816   11466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:43.926736  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:43.926751  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:43.953140  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:43.953176  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:43.986579  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:43.986611  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:44.046797  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:44.046832  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:46.561087  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:46.574475  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:46.574548  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:46.603568  293728 cri.go:89] found id: ""
	I1206 10:11:46.603593  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.603601  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:46.603608  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:46.603688  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:46.629999  293728 cri.go:89] found id: ""
	I1206 10:11:46.630024  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.630034  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:46.630040  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:46.630120  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:46.657373  293728 cri.go:89] found id: ""
	I1206 10:11:46.657399  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.657408  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:46.657414  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:46.657472  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:46.682131  293728 cri.go:89] found id: ""
	I1206 10:11:46.682157  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.682166  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:46.682172  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:46.682229  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:46.712112  293728 cri.go:89] found id: ""
	I1206 10:11:46.712184  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.712201  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:46.712209  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:46.712273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:46.737272  293728 cri.go:89] found id: ""
	I1206 10:11:46.737308  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.737317  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:46.737323  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:46.737402  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:46.762747  293728 cri.go:89] found id: ""
	I1206 10:11:46.762773  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.762782  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:46.762814  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:46.762904  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:46.789056  293728 cri.go:89] found id: ""
	I1206 10:11:46.789092  293728 logs.go:282] 0 containers: []
	W1206 10:11:46.789101  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:46.789110  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:46.789122  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:46.852031  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:46.843591   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.844469   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846096   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.846414   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:46.847930   11578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:46.852055  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:46.852068  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:46.878458  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:46.878490  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:46.909497  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:46.909523  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:46.966671  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:46.966706  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:49.484723  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:49.499040  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:49.499143  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:49.530152  293728 cri.go:89] found id: ""
	I1206 10:11:49.530195  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.530204  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:49.530228  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:49.530311  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:49.556277  293728 cri.go:89] found id: ""
	I1206 10:11:49.556302  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.556311  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:49.556317  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:49.556422  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:49.582278  293728 cri.go:89] found id: ""
	I1206 10:11:49.582303  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.582312  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:49.582318  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:49.582386  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:49.608504  293728 cri.go:89] found id: ""
	I1206 10:11:49.608529  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.608538  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:49.608544  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:49.608624  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:49.633347  293728 cri.go:89] found id: ""
	I1206 10:11:49.633414  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.633429  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:49.633436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:49.633495  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:49.658195  293728 cri.go:89] found id: ""
	I1206 10:11:49.658223  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.658233  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:49.658240  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:49.658297  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:49.691086  293728 cri.go:89] found id: ""
	I1206 10:11:49.691112  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.691122  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:49.691128  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:49.691213  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:49.716625  293728 cri.go:89] found id: ""
	I1206 10:11:49.716652  293728 logs.go:282] 0 containers: []
	W1206 10:11:49.716661  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:49.716669  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:49.716684  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:49.778048  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:49.778093  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:49.792187  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:49.792216  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:49.858528  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:49.849703   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.850362   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852120   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.852678   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:49.854314   11698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 10:11:49.858551  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:49.858566  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:49.884659  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:49.884691  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
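	The cycle above is the whole failure mode in miniature: minikube probes for a running kube-apiserver process, finds none, enumerates the expected CRI containers by name, and falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. As a hedged illustration only (not minikube's actual implementation: the pgrep pattern, crictl flags, and ~3 s interval are taken from the log; everything else is invented), the same poll-then-diagnose loop in Go:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// run executes a command line through bash and returns trimmed stdout,
// loosely mirroring the ssh_runner calls in the log (locally, not over SSH).
func run(cmdline string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmdline).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for {
		// Liveness probe from the log: is a kube-apiserver process running?
		// pgrep exits non-zero when nothing matches, so err != nil means "down".
		if _, err := run(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
			fmt.Println("kube-apiserver found; no diagnosis needed")
			return
		}
		// Down: enumerate the expected control-plane containers by name,
		// as the cri.go probes above do for each component in turn.
		for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, _ := run("sudo crictl ps -a --quiet --name=" + name)
			fmt.Printf("%d containers: %v\n", len(strings.Fields(ids)), strings.Fields(ids))
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
	}
}
```

	Because pgrep only succeeds once an apiserver process exists, the loop keeps diagnosing until the control plane comes up, which is exactly the pattern repeated below until the test times out.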
	I1206 10:11:52.413397  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:52.424250  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:52.424322  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:52.454481  293728 cri.go:89] found id: ""
	I1206 10:11:52.454557  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.454573  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:52.454581  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:52.454642  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:52.487281  293728 cri.go:89] found id: ""
	I1206 10:11:52.487315  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.487325  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:52.487331  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:52.487408  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:52.522975  293728 cri.go:89] found id: ""
	I1206 10:11:52.523008  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.523025  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:52.523032  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:52.523102  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:52.557389  293728 cri.go:89] found id: ""
	I1206 10:11:52.557421  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.557430  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:52.557436  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:52.557494  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:52.583449  293728 cri.go:89] found id: ""
	I1206 10:11:52.583474  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.583483  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:52.583490  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:52.583608  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:52.608370  293728 cri.go:89] found id: ""
	I1206 10:11:52.608412  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.608422  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:52.608429  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:52.608499  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:52.637950  293728 cri.go:89] found id: ""
	I1206 10:11:52.638026  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.638051  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:52.638069  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:52.638160  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:52.663271  293728 cri.go:89] found id: ""
	I1206 10:11:52.663349  293728 logs.go:282] 0 containers: []
	W1206 10:11:52.663413  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:52.663443  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:52.663464  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:52.721303  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:52.721339  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:52.735517  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:52.735548  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:52.806629  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:52.798101   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.799086   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800264   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800722   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.802387   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:52.798101   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.799086   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800264   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.800722   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:52.802387   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:52.806652  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:52.806666  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:52.834909  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:52.834944  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
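	Every "describe nodes" attempt above dies the same way: kubectl cannot even open a TCP connection to https://localhost:8443 ("connection refused"), which points at an apiserver that is not listening at all rather than a TLS or RBAC problem. A minimal sketch of that distinction, assuming only that 8443 is the apiserver's secure port as the log shows; this is an illustration, not part of the test suite:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's "connection refused" means the TCP handshake itself was
	// rejected; a raw dial separates "nothing listening" from TLS/auth errors.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 is accepting connections; the failure is higher up")
}
```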
	I1206 10:11:55.365104  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:55.376039  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:55.376112  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:55.401088  293728 cri.go:89] found id: ""
	I1206 10:11:55.401114  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.401123  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:55.401130  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:55.401187  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:55.426712  293728 cri.go:89] found id: ""
	I1206 10:11:55.426735  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.426744  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:55.426752  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:55.426808  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:55.453355  293728 cri.go:89] found id: ""
	I1206 10:11:55.453433  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.453449  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:55.453456  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:55.453524  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:55.482694  293728 cri.go:89] found id: ""
	I1206 10:11:55.482786  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.482809  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:55.482831  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:55.482965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:55.517524  293728 cri.go:89] found id: ""
	I1206 10:11:55.517567  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.517576  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:55.517582  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:55.517651  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:55.552808  293728 cri.go:89] found id: ""
	I1206 10:11:55.552887  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.552919  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:55.552943  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:55.553051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:55.582318  293728 cri.go:89] found id: ""
	I1206 10:11:55.582391  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.582413  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:55.582435  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:55.582545  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:55.611979  293728 cri.go:89] found id: ""
	I1206 10:11:55.612012  293728 logs.go:282] 0 containers: []
	W1206 10:11:55.612021  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:55.612030  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:55.612043  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:55.641663  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:55.641691  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:11:55.699247  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:55.699281  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:55.714284  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:55.714312  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:55.779980  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:55.771718   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.772511   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774153   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774506   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.776084   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:55.771718   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.772511   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774153   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.774506   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:55.776084   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:55.780002  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:55.780020  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:58.307533  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:11:58.318444  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:11:58.318517  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:11:58.346128  293728 cri.go:89] found id: ""
	I1206 10:11:58.346181  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.346194  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:11:58.346202  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:11:58.346276  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:11:58.370957  293728 cri.go:89] found id: ""
	I1206 10:11:58.370992  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.371001  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:11:58.371013  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:11:58.371093  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:11:58.397685  293728 cri.go:89] found id: ""
	I1206 10:11:58.397717  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.397726  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:11:58.397732  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:11:58.397803  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:11:58.426933  293728 cri.go:89] found id: ""
	I1206 10:11:58.426959  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.426967  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:11:58.426973  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:11:58.427051  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:11:58.456330  293728 cri.go:89] found id: ""
	I1206 10:11:58.456365  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.456375  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:11:58.456381  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:11:58.456448  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:11:58.494975  293728 cri.go:89] found id: ""
	I1206 10:11:58.495018  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.495027  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:11:58.495034  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:11:58.495106  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:11:58.532346  293728 cri.go:89] found id: ""
	I1206 10:11:58.532379  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.532389  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:11:58.532395  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:11:58.532465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:11:58.558540  293728 cri.go:89] found id: ""
	I1206 10:11:58.558576  293728 logs.go:282] 0 containers: []
	W1206 10:11:58.558584  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:11:58.558593  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:11:58.558605  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:11:58.573220  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:11:58.573249  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:11:58.639437  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:11:58.631044   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.631569   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633054   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633435   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.634868   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:11:58.631044   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.631569   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633054   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.633435   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:11:58.634868   12034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:11:58.639512  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:11:58.639535  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:11:58.664823  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:11:58.664861  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:11:58.692934  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:11:58.692966  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:01.250858  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:01.262935  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:01.263112  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:01.291081  293728 cri.go:89] found id: ""
	I1206 10:12:01.291107  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.291117  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:01.291123  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:01.291204  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:01.318105  293728 cri.go:89] found id: ""
	I1206 10:12:01.318138  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.318147  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:01.318168  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:01.318249  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:01.344419  293728 cri.go:89] found id: ""
	I1206 10:12:01.344488  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.344514  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:01.344528  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:01.344601  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:01.370652  293728 cri.go:89] found id: ""
	I1206 10:12:01.370677  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.370686  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:01.370693  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:01.370751  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:01.397501  293728 cri.go:89] found id: ""
	I1206 10:12:01.397528  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.397538  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:01.397544  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:01.397603  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:01.423444  293728 cri.go:89] found id: ""
	I1206 10:12:01.423517  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.423541  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:01.423563  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:01.423646  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:01.453268  293728 cri.go:89] found id: ""
	I1206 10:12:01.453294  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.453303  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:01.453316  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:01.453417  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:01.481810  293728 cri.go:89] found id: ""
	I1206 10:12:01.481890  293728 logs.go:282] 0 containers: []
	W1206 10:12:01.481915  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:01.481932  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:01.481959  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:01.538994  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:01.539079  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:01.553293  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:01.553320  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:01.623989  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:01.612749   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.615513   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.616460   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618024   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618347   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:01.612749   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.615513   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.616460   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618024   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:01.618347   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:01.624063  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:01.624085  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:01.649724  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:01.649757  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
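	The paired `found id: ""` / `0 containers: []` lines throughout these probes come from splitting `crictl ps -a --quiet` output, which prints one container ID per line and nothing at all when no container matches the name filter. A self-contained sketch of that parsing step (the helper name is hypothetical; minikube's real version lives in cri.go):

```go
package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs splits `crictl ps -a --quiet` output (one container ID
// per line) into a slice; empty output means no matching containers, which
// is what every probe in this log reports. Illustrative helper only.
func parseContainerIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	fmt.Println(parseContainerIDs(""))                 // [] -> "0 containers", as logged
	fmt.Println(parseContainerIDs("abc123\ndef456\n")) // [abc123 def456]
}
```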
	I1206 10:12:04.179886  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:04.191201  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:04.191273  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:04.216964  293728 cri.go:89] found id: ""
	I1206 10:12:04.217045  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.217065  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:04.217072  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:04.217168  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:04.252840  293728 cri.go:89] found id: ""
	I1206 10:12:04.252875  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.252884  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:04.252891  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:04.252965  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:04.281583  293728 cri.go:89] found id: ""
	I1206 10:12:04.281614  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.281623  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:04.281629  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:04.281695  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:04.311479  293728 cri.go:89] found id: ""
	I1206 10:12:04.311547  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.311571  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:04.311585  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:04.311658  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:04.337184  293728 cri.go:89] found id: ""
	I1206 10:12:04.337213  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.337221  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:04.337228  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:04.337307  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:04.363672  293728 cri.go:89] found id: ""
	I1206 10:12:04.363705  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.363715  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:04.363738  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:04.363836  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:04.394214  293728 cri.go:89] found id: ""
	I1206 10:12:04.394240  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.394249  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:04.394256  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:04.394367  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:04.419254  293728 cri.go:89] found id: ""
	I1206 10:12:04.419335  293728 logs.go:282] 0 containers: []
	W1206 10:12:04.419359  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:04.419403  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:04.419437  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:04.451555  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:04.451582  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:04.509304  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:04.509336  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:04.523821  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:04.523848  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:04.591566  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:04.581768   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.583295   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.584171   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.585971   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.586453   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:04.581768   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.583295   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.584171   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.585971   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:04.586453   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:04.591591  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:04.591604  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:07.121570  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:07.132505  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:07.132585  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:07.157021  293728 cri.go:89] found id: ""
	I1206 10:12:07.157047  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.157056  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:07.157063  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:07.157151  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:07.182478  293728 cri.go:89] found id: ""
	I1206 10:12:07.182510  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.182519  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:07.182526  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:07.182597  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:07.212401  293728 cri.go:89] found id: ""
	I1206 10:12:07.212424  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.212433  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:07.212439  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:07.212498  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:07.246228  293728 cri.go:89] found id: ""
	I1206 10:12:07.246255  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.246264  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:07.246271  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:07.246333  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:07.273777  293728 cri.go:89] found id: ""
	I1206 10:12:07.273802  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.273811  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:07.273817  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:07.273878  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:07.302425  293728 cri.go:89] found id: ""
	I1206 10:12:07.302464  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.302473  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:07.302481  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:07.302556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:07.328379  293728 cri.go:89] found id: ""
	I1206 10:12:07.328403  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.328412  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:07.328418  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:07.328476  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:07.358727  293728 cri.go:89] found id: ""
	I1206 10:12:07.358751  293728 logs.go:282] 0 containers: []
	W1206 10:12:07.358760  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:07.358771  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:07.358811  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:07.415522  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:07.415561  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:07.429309  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:07.429338  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:07.497723  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:07.488450   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.488945   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.490709   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.491285   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.492907   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:07.488450   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.488945   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.490709   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.491285   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:07.492907   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:07.497749  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:07.497762  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:07.524612  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:07.524648  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
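	For reference, the dmesg probe repeated in every cycle filters the kernel ring buffer down to warning-or-worse entries and caps the result at 400 lines, mirroring the `-n 400` used by the journalctl probes. A hedged wrapper with the flags spelled out (the command and flags are exactly as gathered in the log; the Go wrapper itself is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags as gathered in the log:
	//   -P           --nopager: write straight to stdout
	//   -H           --human: readable timestamps
	//   -L=never     no ANSI color codes in captured logs
	//   --level ...  warnings and worse only
	// `tail -n 400` caps the output like the journalctl probes' -n 400.
	cmd := `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("dmesg probe failed:", err)
	}
	fmt.Print(string(out))
}
```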
	I1206 10:12:10.055528  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:10.066871  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:10.066968  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:10.092582  293728 cri.go:89] found id: ""
	I1206 10:12:10.092611  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.092622  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:10.092630  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:10.092695  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:10.120230  293728 cri.go:89] found id: ""
	I1206 10:12:10.120321  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.120347  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:10.120366  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:10.120465  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:10.146387  293728 cri.go:89] found id: ""
	I1206 10:12:10.146464  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.146489  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:10.146508  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:10.146582  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:10.173457  293728 cri.go:89] found id: ""
	I1206 10:12:10.173484  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.173493  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:10.173500  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:10.173592  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:10.202187  293728 cri.go:89] found id: ""
	I1206 10:12:10.202262  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.202285  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:10.202303  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:10.202393  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:10.232838  293728 cri.go:89] found id: ""
	I1206 10:12:10.232901  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.232922  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:10.232940  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:10.233025  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:10.267445  293728 cri.go:89] found id: ""
	I1206 10:12:10.267520  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.267543  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:10.267561  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:10.267650  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:10.298314  293728 cri.go:89] found id: ""
	I1206 10:12:10.298389  293728 logs.go:282] 0 containers: []
	W1206 10:12:10.298412  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:10.298434  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:10.298472  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:10.325341  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:10.325374  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:10.385049  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:10.385081  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:10.398513  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:10.398540  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:10.463844  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:10.454441   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.455251   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457119   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457874   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.459632   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:10.454441   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.455251   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457119   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.457874   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:10.459632   12500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:10.463908  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:10.463945  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:12:12.991294  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:13.006571  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:12:13.006645  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:12:13.040431  293728 cri.go:89] found id: ""
	I1206 10:12:13.040457  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.040466  293728 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:12:13.040479  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:12:13.040544  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:12:13.066025  293728 cri.go:89] found id: ""
	I1206 10:12:13.066047  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.066056  293728 logs.go:284] No container was found matching "etcd"
	I1206 10:12:13.066062  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:12:13.066134  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:12:13.093459  293728 cri.go:89] found id: ""
	I1206 10:12:13.093482  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.093491  293728 logs.go:284] No container was found matching "coredns"
	I1206 10:12:13.093496  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:12:13.093556  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:12:13.118066  293728 cri.go:89] found id: ""
	I1206 10:12:13.118089  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.118098  293728 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:12:13.118104  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:12:13.118162  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:12:13.145619  293728 cri.go:89] found id: ""
	I1206 10:12:13.145685  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.145704  293728 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:12:13.145711  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:12:13.145770  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:12:13.174833  293728 cri.go:89] found id: ""
	I1206 10:12:13.174857  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.174866  293728 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:12:13.174872  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:12:13.174934  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:12:13.200490  293728 cri.go:89] found id: ""
	I1206 10:12:13.200517  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.200526  293728 logs.go:284] No container was found matching "kindnet"
	I1206 10:12:13.200532  293728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 10:12:13.200590  293728 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 10:12:13.243683  293728 cri.go:89] found id: ""
	I1206 10:12:13.243709  293728 logs.go:282] 0 containers: []
	W1206 10:12:13.243718  293728 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 10:12:13.243726  293728 logs.go:123] Gathering logs for container status ...
	I1206 10:12:13.243741  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:12:13.279303  293728 logs.go:123] Gathering logs for kubelet ...
	I1206 10:12:13.279330  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:12:13.337861  293728 logs.go:123] Gathering logs for dmesg ...
	I1206 10:12:13.337897  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:12:13.351559  293728 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:12:13.351634  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:12:13.413990  293728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:13.406460   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.406956   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408410   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408802   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.410225   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:12:13.406460   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.406956   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408410   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.408802   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:13.410225   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:12:13.414012  293728 logs.go:123] Gathering logs for containerd ...
	I1206 10:12:13.414028  293728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	[... the same log-gathering cycle repeats at 10:12:15, 10:12:18, 10:12:21, 10:12:24, 10:12:27 and 10:12:30: pgrep finds no kube-apiserver process; crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet or kubernetes-dashboard containers; and each "kubectl describe nodes" attempt fails with the same connection-refused errors against localhost:8443 ...]
	I1206 10:12:33.618907  293728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:12:33.633558  293728 out.go:203] 
	W1206 10:12:33.636407  293728 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1206 10:12:33.636439  293728 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1206 10:12:33.636448  293728 out.go:285] * Related issues:
	W1206 10:12:33.636468  293728 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1206 10:12:33.636488  293728 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1206 10:12:33.640150  293728 out.go:203] 
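	
	Note: the K8S_APISERVER_MISSING exit above means minikube waited the full 6m0s without a kube-apiserver process ever appearing. The kubelet section further down suggests the likely root cause: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host, so the static control-plane pods (apiserver included) are never created. A minimal check, assuming shell access to the node, to confirm which cgroup version the host mounts ("tmpfs" indicates cgroup v1, "cgroup2fs" indicates v2):
	
	    # print the filesystem type mounted at the cgroup root
	    stat -fc %T /sys/fs/cgroup/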
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.180738951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.180841500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181051963Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181150721Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181229926Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181300302Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181371777Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181434227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181504595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.181620526Z" level=info msg="Connect containerd service"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.182068703Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.183078485Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193499317Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193692279Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193840088Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.193788608Z" level=info msg="Start recovering state"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235102688Z" level=info msg="Start event monitor"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235301393Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235445231Z" level=info msg="Start streaming server"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235540452Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235788569Z" level=info msg="runtime interface starting up..."
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235878794Z" level=info msg="starting plugins..."
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.235966762Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:06:31 newest-cni-387337 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:06:31 newest-cni-387337 containerd[555]: time="2025-12-06T10:06:31.238179492Z" level=info msg="containerd successfully booted in 0.085161s"
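	
	The "failed to load cni during init" line above is consistent with a node where bootstrap never progressed far enough to install a CNI config; containerd itself booted cleanly in ~0.09s. One way to confirm, assuming shell access to the node, is to list the directory the error names:
	
	    # an empty directory matches the containerd warning above
	    ls -la /etc/cni/net.d/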
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:12:46.946494   13947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:46.947056   13947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:46.948595   13947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:46.948935   13947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:12:46.950411   13947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
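	
	Every kubectl attempt fails identically because nothing is listening on the apiserver port. A quick verification sketch using standard iproute2 tooling (not part of the test harness):
	
	    # list TCP listeners on port 8443; no output means no apiserver socket
	    sudo ss -ltnp | grep ':8443'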
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:12:46 up  1:55,  0 user,  load average: 1.20, 0.78, 1.30
	Linux newest-cni-387337 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:12:43 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:44 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 06 10:12:44 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:44 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:44 newest-cni-387337 kubelet[13810]: E1206 10:12:44.600187   13810 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:44 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:44 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:45 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 06 10:12:45 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:45 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:45 newest-cni-387337 kubelet[13831]: E1206 10:12:45.351889   13831 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:45 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:45 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:45 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 06 10:12:45 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:45 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:46 newest-cni-387337 kubelet[13850]: E1206 10:12:46.067793   13850 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:46 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:46 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:12:46 newest-cni-387337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 06 10:12:46 newest-cni-387337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:46 newest-cni-387337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:12:46 newest-cni-387337 kubelet[13916]: E1206 10:12:46.800644   13916 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:12:46 newest-cni-387337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:12:46 newest-cni-387337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-387337 -n newest-cni-387337: exit status 2 (371.899635ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-387337" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.93s)
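For triage, the repeated kubelet crash in the dump above is the primary failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, and the kernel line earlier in the dump (5.15.0-1084-aws, Ubuntu 20.04) is one that still boots with cgroup v1 by default. A minimal sketch of the two usual ways out, assuming a systemd host and the failCgroupV1 KubeletConfiguration field from cgroup v1 maintenance mode (the field name is an assumption, not something this report confirms):

    # Confirm which cgroup hierarchy the host mounts:
    # "cgroup2fs" means cgroup v2 (unified); "tmpfs" means cgroup v1.
    stat -fc %T /sys/fs/cgroup/

    # Option A: boot the host into the unified hierarchy, then reboot.
    sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
    sudo update-grub && sudo reboot

    # Option B (stop-gap): opt back into cgroup v1 in the kubelet config,
    # e.g. in /var/lib/kubelet/config.yaml (field name is assumed):
    #   failCgroupV1: false

Option A matches why the same suites pass on cgroup v2 hosts; Option B only defers the removal that the error message itself warns about.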

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (263.36s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
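Each warning below is the poll loop hitting a dead API server, not a missing dashboard pod; the helper's list call is equivalent to the following manual check (a sketch; the profile name is a placeholder, not taken from this report):

    # Reproduce the helper's poll by hand (profile name is illustrative):
    out/minikube-linux-arm64 -p <no-preload-profile> kubectl -- \
      get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

A connection refused from this query points at the control plane brought up by SecondStart, so the addon check can only time out.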
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
    [the identical WARNING above recurs 148 times through the end of this excerpt as the poll retries against the unreachable API server; the interleaved klog output was:]
E1206 10:19:11.021114    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/auto-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [the auto-793086 cert_rotation error above repeats 14 times with growing backoff between 10:19:11 and 10:19:51]
I1206 10:19:17.902379    4292 config.go:182] Loaded profile config "custom-flannel-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
E1206 10:19:42.815122    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1206 10:20:32.959192    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/auto-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 21 more times]
E1206 10:20:55.755213    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 6 more times]
E1206 10:21:01.906207    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kindnet-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[same kindnet-793086 cert_rotation error repeated 7 more times between 10:21:01.912 and 10:21:02.555]
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated once more]
E1206 10:21:04.479827    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kindnet-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 2 more times]
E1206 10:21:07.041778    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kindnet-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated once more]
E1206 10:21:09.863253    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/old-k8s-version-587884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 2 more times]
E1206 10:21:12.163585    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kindnet-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 9 more times]
E1206 10:21:22.405691    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kindnet-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 13 more times]
E1206 10:21:36.062022    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 6 more times]
E1206 10:21:42.887978    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kindnet-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 11 more times]
E1206 10:21:54.881030    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/auto-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 18 more times]
I1206 10:22:13.963357    4292 config.go:182] Loaded profile config "flannel-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated 8 more times]
E1206 10:22:23.850243    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kindnet-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING repeated once more]
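Every poll above failed the same way: nothing was listening on 192.168.76.2:8443, i.e. the apiserver inside the no-preload-257359 node never came back after the stop/start cycle, so the dashboard pod list could never be served. A quick manual confirmation (a sketch, assuming the profile and its container still exist; the harness runs the second command itself just below) would be:

	# sketch: assumes the no-preload-257359 container and profile still exist on this host
	docker inspect --format '{{.State.Status}}' no-preload-257359
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359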
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359: exit status 2 (316.343118ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-257359" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
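The wait that timed out here is just a label-selector pod list: the URL in the warnings above (labelSelector=k8s-app%3Dkubernetes-dashboard) maps to the following kubectl query, shown as a hypothetical manual reproduction that can only succeed once the apiserver is reachable again:

	# sketch: equivalent of the helper's poll; requires a running apiserver
	kubectl --context no-preload-257359 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard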
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-257359 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-257359 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.444µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-257359 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
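The image assertion at start_stop_delete_test.go:295 expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4; with a live apiserver the same check could be made by hand (a sketch, not part of the harness):

	# sketch: prints the first container image of the scraper deployment
	kubectl --context no-preload-257359 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'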
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-257359
helpers_test.go:243: (dbg) docker inspect no-preload-257359:

-- stdout --
	[
	    {
	        "Id": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	        "Created": "2025-12-06T09:52:27.333376101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:02:50.853067046Z",
	            "FinishedAt": "2025-12-06T10:02:49.497503356Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hostname",
	        "HostsPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/hosts",
	        "LogPath": "/var/lib/docker/containers/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26/76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26-json.log",
	        "Name": "/no-preload-257359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-257359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-257359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76494ba86a4019c5b30e85f46fc5b7153d0b7f8e73320fe4012b58522dffbe26",
	                "LowerDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a89d34a3d8842872a51289e3a5eba31c29367f940e480f24f3a3ec8cf3ad9613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-257359",
	                "Source": "/var/lib/docker/volumes/no-preload-257359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-257359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-257359",
	                "name.minikube.sigs.k8s.io": "no-preload-257359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "263a8cb62ad65d73ef315ff544437f3a15543e9da8e511558b3504b20118eae7",
	            "SandboxKey": "/var/run/docker/netns/263a8cb62ad6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-257359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:cd:c5:1d:17:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b05bfbfa55363c82b2c20e75689dc6d905b9177d9ed6efb1bc4c663e65903cf4",
	                    "EndpointID": "fe68f03ea36cc45569898aaadfae8dde5a2342dd57895d5970718f4ce7302e58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-257359",
	                        "76494ba86a40"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
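In the HostConfig above, the PortBindings entries carry an empty HostPort (the ports were published as 127.0.0.1:: so Docker picks an ephemeral port), while the actually assigned host ports appear under NetworkSettings.Ports. A minimal way to read one back, using the same Go template minikube itself runs later in this report (container name taken from the inspect output above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-257359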
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359: exit status 2 (340.427071ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
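The formatted query prints Running for the host yet exits 2; on a hedged reading of minikube's status bit flags, that means a non-host component (the cluster/apiserver) is not running even though the node container is up, which is why the harness notes it "may be ok". Dropping --format lists every component's state (same binary and profile as above):

	out/minikube-linux-arm64 status -p no-preload-257359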
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-257359 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-793086 sudo systemctl cat kubelet --no-pager                                                                             │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo journalctl -xeu kubelet --all --full --no-pager                                                              │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo cat /etc/kubernetes/kubelet.conf                                                                             │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo cat /var/lib/kubelet/config.yaml                                                                             │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo systemctl status docker --all --full --no-pager                                                              │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh     │ -p enable-default-cni-793086 sudo systemctl cat docker --no-pager                                                                              │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo cat /etc/docker/daemon.json                                                                                  │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh     │ -p enable-default-cni-793086 sudo docker system info                                                                                           │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh     │ -p enable-default-cni-793086 sudo systemctl status cri-docker --all --full --no-pager                                                          │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh     │ -p enable-default-cni-793086 sudo systemctl cat cri-docker --no-pager                                                                          │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                     │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │                     │
	│ ssh     │ -p enable-default-cni-793086 sudo cat /usr/lib/systemd/system/cri-docker.service                                                               │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo cri-dockerd --version                                                                                        │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:20 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo systemctl status containerd --all --full --no-pager                                                          │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:20 UTC │ 06 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo systemctl cat containerd --no-pager                                                                          │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo cat /lib/systemd/system/containerd.service                                                                   │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo cat /etc/containerd/config.toml                                                                              │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo containerd config dump                                                                                       │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo systemctl status crio --all --full --no-pager                                                                │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │                     │
	│ ssh     │ -p enable-default-cni-793086 sudo systemctl cat crio --no-pager                                                                                │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                      │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-793086 sudo crio config                                                                                                  │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:21 UTC │
	│ delete  │ -p enable-default-cni-793086                                                                                                                   │ enable-default-cni-793086 │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:21 UTC │
	│ start   │ -p flannel-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd │ flannel-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh     │ -p flannel-793086 pgrep -a kubelet                                                                                                             │ flannel-793086            │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:21:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:21:05.791695  347984 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:21:05.791831  347984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:21:05.791842  347984 out.go:374] Setting ErrFile to fd 2...
	I1206 10:21:05.791848  347984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:21:05.792109  347984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 10:21:05.792508  347984 out.go:368] Setting JSON to false
	I1206 10:21:05.793329  347984 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7417,"bootTime":1765009049,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:21:05.793402  347984 start.go:143] virtualization:  
	I1206 10:21:05.797656  347984 out.go:179] * [flannel-793086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:21:05.802197  347984 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 10:21:05.802392  347984 notify.go:221] Checking for updates...
	I1206 10:21:05.809040  347984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:21:05.812182  347984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:21:05.815312  347984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 10:21:05.818366  347984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:21:05.821370  347984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:21:05.824906  347984 config.go:182] Loaded profile config "no-preload-257359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:21:05.825002  347984 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:21:05.858775  347984 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:21:05.858901  347984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:21:05.918140  347984 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:21:05.908038494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:21:05.918249  347984 docker.go:319] overlay module found
	I1206 10:21:05.921661  347984 out.go:179] * Using the docker driver based on user configuration
	I1206 10:21:05.924758  347984 start.go:309] selected driver: docker
	I1206 10:21:05.924780  347984 start.go:927] validating driver "docker" against <nil>
	I1206 10:21:05.924793  347984 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:21:05.925525  347984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:21:05.993020  347984 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:21:05.984163832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:21:05.993181  347984 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:21:05.993399  347984 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:21:05.996586  347984 out.go:179] * Using Docker driver with root privileges
	I1206 10:21:05.999631  347984 cni.go:84] Creating CNI manager for "flannel"
	I1206 10:21:05.999672  347984 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1206 10:21:05.999782  347984 start.go:353] cluster config:
	{Name:flannel-793086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:21:06.009006  347984 out.go:179] * Starting "flannel-793086" primary control-plane node in "flannel-793086" cluster
	I1206 10:21:06.012029  347984 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:21:06.015185  347984 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:21:06.018172  347984 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 10:21:06.018216  347984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:21:06.018226  347984 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1206 10:21:06.018238  347984 cache.go:65] Caching tarball of preloaded images
	I1206 10:21:06.018331  347984 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:21:06.018342  347984 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 10:21:06.018449  347984 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/config.json ...
	I1206 10:21:06.018476  347984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/config.json: {Name:mk2872cab1179f206dfe33c15c22432d2d67dc82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:06.038772  347984 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:21:06.038795  347984 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:21:06.038816  347984 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:21:06.038845  347984 start.go:360] acquireMachinesLock for flannel-793086: {Name:mkb668daf6c1867fec3a0abd534faa1aadefd92a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:21:06.038953  347984 start.go:364] duration metric: took 88.986µs to acquireMachinesLock for "flannel-793086"
	I1206 10:21:06.038984  347984 start.go:93] Provisioning new machine with config: &{Name:flannel-793086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:21:06.039064  347984 start.go:125] createHost starting for "" (driver="docker")
	I1206 10:21:06.042604  347984 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 10:21:06.042871  347984 start.go:159] libmachine.API.Create for "flannel-793086" (driver="docker")
	I1206 10:21:06.042916  347984 client.go:173] LocalClient.Create starting
	I1206 10:21:06.043010  347984 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
	I1206 10:21:06.043045  347984 main.go:143] libmachine: Decoding PEM data...
	I1206 10:21:06.043061  347984 main.go:143] libmachine: Parsing certificate...
	I1206 10:21:06.043128  347984 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
	I1206 10:21:06.043149  347984 main.go:143] libmachine: Decoding PEM data...
	I1206 10:21:06.043160  347984 main.go:143] libmachine: Parsing certificate...
	I1206 10:21:06.043596  347984 cli_runner.go:164] Run: docker network inspect flannel-793086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 10:21:06.060078  347984 cli_runner.go:211] docker network inspect flannel-793086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 10:21:06.060165  347984 network_create.go:284] running [docker network inspect flannel-793086] to gather additional debugging logs...
	I1206 10:21:06.060183  347984 cli_runner.go:164] Run: docker network inspect flannel-793086
	W1206 10:21:06.075365  347984 cli_runner.go:211] docker network inspect flannel-793086 returned with exit code 1
	I1206 10:21:06.075437  347984 network_create.go:287] error running [docker network inspect flannel-793086]: docker network inspect flannel-793086: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-793086 not found
	I1206 10:21:06.075451  347984 network_create.go:289] output of [docker network inspect flannel-793086]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-793086 not found
	
	** /stderr **
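	# The exit-1 inspect above only means the network does not exist yet; a
	# hedged standalone check (same command, without the Go-template output):
	docker network inspect flannel-793086 >/dev/null 2>&1 || echo "flannel-793086 network missing; minikube will create it"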
	I1206 10:21:06.075546  347984 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:21:06.093035  347984 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
	I1206 10:21:06.093471  347984 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6479799cc46a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:b3:f8:bd:10:a1} reservation:<nil>}
	I1206 10:21:06.093929  347984 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-045bb1cdddf9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:c6:f0:a4:f5:8d} reservation:<nil>}
	I1206 10:21:06.094264  347984 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b05bfbfa5536 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:01:4f:ea:ac:91} reservation:<nil>}
	I1206 10:21:06.094785  347984 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001acc420}
	I1206 10:21:06.094810  347984 network_create.go:124] attempt to create docker network flannel-793086 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 10:21:06.094875  347984 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-793086 flannel-793086
	I1206 10:21:06.156515  347984 network_create.go:108] docker network flannel-793086 192.168.85.0/24 created
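	# To reproduce the subnet scan above by hand: print each bridge network's
	# subnet with the same {{range .IPAM.Config}} template minikube uses, and
	# confirm 192.168.85.0/24 was the first free /24 in the sequence.
	docker network ls --format '{{.Name}}' | xargs -n1 docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}} {{.Name}}'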
	I1206 10:21:06.156551  347984 kic.go:121] calculated static IP "192.168.85.2" for the "flannel-793086" container
	I1206 10:21:06.156622  347984 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 10:21:06.174713  347984 cli_runner.go:164] Run: docker volume create flannel-793086 --label name.minikube.sigs.k8s.io=flannel-793086 --label created_by.minikube.sigs.k8s.io=true
	I1206 10:21:06.193418  347984 oci.go:103] Successfully created a docker volume flannel-793086
	I1206 10:21:06.193523  347984 cli_runner.go:164] Run: docker run --rm --name flannel-793086-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-793086 --entrypoint /usr/bin/test -v flannel-793086:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 10:21:06.750521  347984 oci.go:107] Successfully prepared a docker volume flannel-793086
	I1206 10:21:06.750593  347984 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 10:21:06.750606  347984 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 10:21:06.750676  347984 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v flannel-793086:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 10:21:10.779516  347984 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v flannel-793086:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.028778839s)
	I1206 10:21:10.779601  347984 kic.go:203] duration metric: took 4.028945117s to extract preloaded images to volume ...
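	# The preload tarball was unpacked into the named volume flannel-793086;
	# its backing path on the host (matches Mounts[].Source in the docker
	# inspect dump earlier in this report):
	docker volume inspect flannel-793086 --format '{{.Mountpoint}}'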
	W1206 10:21:10.779804  347984 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 10:21:10.779916  347984 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 10:21:10.835717  347984 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-793086 --name flannel-793086 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-793086 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-793086 --network flannel-793086 --ip 192.168.85.2 --volume flannel-793086:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
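	# Flags of interest in the run above: --memory=3072mb and --cpus=2 map to
	# HostConfig.Memory / HostConfig.NanoCpus (3221225472 and 2000000000, the
	# same values shown in the inspect dump earlier); a quick cross-check:
	docker container inspect flannel-793086 --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'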
	I1206 10:21:11.189255  347984 cli_runner.go:164] Run: docker container inspect flannel-793086 --format={{.State.Running}}
	I1206 10:21:11.210460  347984 cli_runner.go:164] Run: docker container inspect flannel-793086 --format={{.State.Status}}
	I1206 10:21:11.234440  347984 cli_runner.go:164] Run: docker exec flannel-793086 stat /var/lib/dpkg/alternatives/iptables
	I1206 10:21:11.293249  347984 oci.go:144] the created container "flannel-793086" has a running status.
	I1206 10:21:11.293285  347984 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa...
	I1206 10:21:11.655901  347984 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 10:21:11.681061  347984 cli_runner.go:164] Run: docker container inspect flannel-793086 --format={{.State.Status}}
	I1206 10:21:11.705264  347984 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 10:21:11.705290  347984 kic_runner.go:114] Args: [docker exec --privileged flannel-793086 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 10:21:11.772213  347984 cli_runner.go:164] Run: docker container inspect flannel-793086 --format={{.State.Status}}
	I1206 10:21:11.801665  347984 machine.go:94] provisionDockerMachine start ...
	I1206 10:21:11.801769  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:11.825581  347984 main.go:143] libmachine: Using SSH client type: native
	I1206 10:21:11.825903  347984 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1206 10:21:11.825919  347984 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:21:11.826542  347984 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48714->127.0.0.1:33133: read: connection reset by peer
	I1206 10:21:14.983428  347984 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-793086
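	# The "connection reset by peer" at 10:21:11 is sshd not yet accepting
	# inside the freshly started container; the retry ~3s later succeeds.
	# Equivalent manual probe (port and key path copied from this log):
	ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa -p 33133 docker@127.0.0.1 hostname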
	
	I1206 10:21:14.983451  347984 ubuntu.go:182] provisioning hostname "flannel-793086"
	I1206 10:21:14.983514  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:15.017212  347984 main.go:143] libmachine: Using SSH client type: native
	I1206 10:21:15.017535  347984 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1206 10:21:15.017546  347984 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-793086 && echo "flannel-793086" | sudo tee /etc/hostname
	I1206 10:21:15.190811  347984 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-793086
	
	I1206 10:21:15.191055  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:15.212413  347984 main.go:143] libmachine: Using SSH client type: native
	I1206 10:21:15.212731  347984 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1206 10:21:15.212747  347984 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-793086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-793086/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-793086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:21:15.375978  347984 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:21:15.376079  347984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
	I1206 10:21:15.376132  347984 ubuntu.go:190] setting up certificates
	I1206 10:21:15.376160  347984 provision.go:84] configureAuth start
	I1206 10:21:15.376245  347984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-793086
	I1206 10:21:15.397234  347984 provision.go:143] copyHostCerts
	I1206 10:21:15.397301  347984 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
	I1206 10:21:15.397310  347984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
	I1206 10:21:15.397386  347984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
	I1206 10:21:15.397488  347984 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
	I1206 10:21:15.397494  347984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
	I1206 10:21:15.397520  347984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
	I1206 10:21:15.397572  347984 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
	I1206 10:21:15.397577  347984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
	I1206 10:21:15.397614  347984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
	I1206 10:21:15.397666  347984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.flannel-793086 san=[127.0.0.1 192.168.85.2 flannel-793086 localhost minikube]
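	# Hedged spot-check that the server certificate generated above carries
	# the SANs listed (127.0.0.1, 192.168.85.2, flannel-793086, localhost,
	# minikube):
	openssl x509 -in /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'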
	I1206 10:21:15.547878  347984 provision.go:177] copyRemoteCerts
	I1206 10:21:15.547951  347984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:21:15.548020  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:15.566847  347984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa Username:docker}
	I1206 10:21:15.671715  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:21:15.689907  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 10:21:15.708370  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1206 10:21:15.726012  347984 provision.go:87] duration metric: took 349.816682ms to configureAuth
	I1206 10:21:15.726036  347984 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:21:15.726228  347984 config.go:182] Loaded profile config "flannel-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:21:15.726236  347984 machine.go:97] duration metric: took 3.924549379s to provisionDockerMachine
	I1206 10:21:15.726243  347984 client.go:176] duration metric: took 9.683320643s to LocalClient.Create
	I1206 10:21:15.726266  347984 start.go:167] duration metric: took 9.68339641s to libmachine.API.Create "flannel-793086"
	I1206 10:21:15.726275  347984 start.go:293] postStartSetup for "flannel-793086" (driver="docker")
	I1206 10:21:15.726283  347984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:21:15.726332  347984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:21:15.726370  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:15.744349  347984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa Username:docker}
	I1206 10:21:15.851357  347984 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:21:15.854827  347984 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:21:15.854856  347984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:21:15.854868  347984 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
	I1206 10:21:15.854926  347984 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
	I1206 10:21:15.855008  347984 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
	I1206 10:21:15.855116  347984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:21:15.862763  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:21:15.880640  347984 start.go:296] duration metric: took 154.349731ms for postStartSetup
	I1206 10:21:15.881016  347984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-793086
	I1206 10:21:15.898635  347984 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/config.json ...
	I1206 10:21:15.898930  347984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:21:15.898987  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:15.916606  347984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa Username:docker}
	I1206 10:21:16.024435  347984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:21:16.030006  347984 start.go:128] duration metric: took 9.990926975s to createHost
	I1206 10:21:16.030073  347984 start.go:83] releasing machines lock for "flannel-793086", held for 9.991105308s
	I1206 10:21:16.030156  347984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-793086
	I1206 10:21:16.048343  347984 ssh_runner.go:195] Run: cat /version.json
	I1206 10:21:16.048383  347984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:21:16.048408  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:16.048443  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:16.066986  347984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa Username:docker}
	I1206 10:21:16.068875  347984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa Username:docker}
	I1206 10:21:16.258980  347984 ssh_runner.go:195] Run: systemctl --version
	I1206 10:21:16.265983  347984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:21:16.270479  347984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:21:16.270552  347984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:21:16.298555  347984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
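	# After the rename pass above, the bridge/podman configs carry the
	# .mk_disabled suffix; a hedged look at what remains active on the node:
	docker exec flannel-793086 ls -la /etc/cni/net.d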
	I1206 10:21:16.298577  347984 start.go:496] detecting cgroup driver to use...
	I1206 10:21:16.298612  347984 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:21:16.298671  347984 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:21:16.315226  347984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:21:16.328760  347984 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:21:16.328865  347984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:21:16.346163  347984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:21:16.365081  347984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:21:16.480485  347984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:21:16.597661  347984 docker.go:234] disabling docker service ...
	I1206 10:21:16.597725  347984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:21:16.621208  347984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:21:16.634254  347984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:21:16.762879  347984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:21:16.903220  347984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:21:16.917619  347984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:21:16.932591  347984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:21:16.942229  347984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:21:16.951803  347984 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:21:16.951920  347984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:21:16.961243  347984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:21:16.970584  347984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:21:16.979853  347984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:21:16.989567  347984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:21:16.998509  347984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:21:17.011266  347984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:21:17.020647  347984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
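	# The sed pipeline above edits /etc/containerd/config.toml in place:
	# cgroupfs driver (SystemdCgroup = false), pause:3.10.1 sandbox image,
	# runc.v2 runtime, and unprivileged ports enabled. A hedged spot-check:
	docker exec flannel-793086 grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml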
	I1206 10:21:17.029826  347984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:21:17.037779  347984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:21:17.045666  347984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:21:17.167470  347984 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:21:17.311885  347984 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:21:17.312005  347984 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:21:17.316026  347984 start.go:564] Will wait 60s for crictl version
	I1206 10:21:17.316140  347984 ssh_runner.go:195] Run: which crictl
	I1206 10:21:17.319812  347984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:21:17.348583  347984 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:21:17.348663  347984 ssh_runner.go:195] Run: containerd --version
	I1206 10:21:17.370806  347984 ssh_runner.go:195] Run: containerd --version
	I1206 10:21:17.395566  347984 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1206 10:21:17.398537  347984 cli_runner.go:164] Run: docker network inspect flannel-793086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:21:17.415557  347984 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 10:21:17.419686  347984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:21:17.430393  347984 kubeadm.go:884] updating cluster {Name:flannel-793086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:21:17.430524  347984 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 10:21:17.430603  347984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:21:17.457184  347984 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:21:17.457211  347984 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:21:17.457273  347984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:21:17.497544  347984 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:21:17.497568  347984 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:21:17.497576  347984 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1206 10:21:17.497665  347984 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-793086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:flannel-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
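The empty ExecStart= followed by a populated one in the unit above is the standard systemd drop-in idiom: for a non-oneshot service an override must first clear the inherited command before supplying its own. Once minikube writes the drop-in (the scp to 10-kubeadm.conf appears a few lines below), the merged unit can be inspected with standard systemctl:

	  systemctl cat kubelet                                   # unit plus all drop-ins, merged
	  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf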
	I1206 10:21:17.497738  347984 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:21:17.526737  347984 cni.go:84] Creating CNI manager for "flannel"
	I1206 10:21:17.526774  347984 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:21:17.526796  347984 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-793086 NodeName:flannel-793086 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:21:17.526917  347984 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "flannel-793086"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
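The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Outside of minikube, a config like this can be sanity-checked with kubeadm itself (a sketch; kubeadm config validate exists in recent kubeadm releases, and the binary path is the one this log uses):

	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new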
	
	I1206 10:21:17.526989  347984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 10:21:17.535229  347984 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:21:17.535304  347984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:21:17.543250  347984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1206 10:21:17.559782  347984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 10:21:17.573990  347984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1206 10:21:17.587678  347984 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:21:17.591612  347984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:21:17.601464  347984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:21:17.707828  347984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:21:17.724173  347984 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086 for IP: 192.168.85.2
	I1206 10:21:17.724193  347984 certs.go:195] generating shared ca certs ...
	I1206 10:21:17.724210  347984 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:17.724351  347984 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
	I1206 10:21:17.724402  347984 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
	I1206 10:21:17.724409  347984 certs.go:257] generating profile certs ...
	I1206 10:21:17.724466  347984 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/client.key
	I1206 10:21:17.724478  347984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/client.crt with IP's: []
	I1206 10:21:17.886000  347984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/client.crt ...
	I1206 10:21:17.886032  347984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/client.crt: {Name:mk76ee3a56a73cae55cc92c0a69e4707127aa96c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:17.886225  347984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/client.key ...
	I1206 10:21:17.886238  347984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/client.key: {Name:mkb867e85e9a4b373203f1f7f3b1b03fe7a32def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:17.886333  347984 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.key.c3befa8b
	I1206 10:21:17.886350  347984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.crt.c3befa8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 10:21:18.277995  347984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.crt.c3befa8b ...
	I1206 10:21:18.278029  347984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.crt.c3befa8b: {Name:mk637c29e7df8a063ce586aae361d6c48e40bdb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:18.278216  347984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.key.c3befa8b ...
	I1206 10:21:18.278234  347984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.key.c3befa8b: {Name:mk4158a3e97d7089024c2c426f74ac7a7f7718b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:18.278305  347984 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.crt.c3befa8b -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.crt
	I1206 10:21:18.278399  347984 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.key.c3befa8b -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.key
	I1206 10:21:18.278462  347984 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/proxy-client.key
	I1206 10:21:18.278480  347984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/proxy-client.crt with IP's: []
	I1206 10:21:18.445794  347984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/proxy-client.crt ...
	I1206 10:21:18.445823  347984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/proxy-client.crt: {Name:mkc4d57b2530ce4e3a1c46ffccf5219d7bc21aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:18.445995  347984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/proxy-client.key ...
	I1206 10:21:18.446018  347984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/proxy-client.key: {Name:mk3bdcea0c9dc1ce968d0dfb1ee1402102dd0106 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:18.446206  347984 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
	W1206 10:21:18.446253  347984 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
	I1206 10:21:18.446267  347984 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:21:18.446302  347984 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
	I1206 10:21:18.446330  347984 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:21:18.446356  347984 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
	I1206 10:21:18.446404  347984 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
	I1206 10:21:18.447040  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:21:18.466873  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 10:21:18.487569  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:21:18.508036  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:21:18.526669  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 10:21:18.546043  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:21:18.564869  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:21:18.584189  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/flannel-793086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:21:18.602671  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
	I1206 10:21:18.620831  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:21:18.638573  347984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
	I1206 10:21:18.656680  347984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:21:18.670034  347984 ssh_runner.go:195] Run: openssl version
	I1206 10:21:18.676269  347984 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
	I1206 10:21:18.683676  347984 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
	I1206 10:21:18.691575  347984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
	I1206 10:21:18.695466  347984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:38 /usr/share/ca-certificates/42922.pem
	I1206 10:21:18.695554  347984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
	I1206 10:21:18.736651  347984 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:21:18.744384  347984 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
	I1206 10:21:18.752167  347984 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:21:18.760390  347984 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:21:18.767852  347984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:21:18.771806  347984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:21:18.771923  347984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:21:18.813098  347984 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:21:18.820572  347984 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 10:21:18.828029  347984 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
	I1206 10:21:18.835589  347984 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
	I1206 10:21:18.842998  347984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
	I1206 10:21:18.846809  347984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:38 /usr/share/ca-certificates/4292.pem
	I1206 10:21:18.846911  347984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
	I1206 10:21:18.888675  347984 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:21:18.896299  347984 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
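The ln/openssl sequence above reimplements what c_rehash does: OpenSSL locates trust anchors by an eight-hex-digit subject hash, so each PEM in /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs. The hash names in the log (b5213941.0, 3ec20f2e.0, 51391683.0) come straight from:

	  # prints the subject hash OpenSSL uses as the symlink name
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # bulk equivalent, where the image ships it
	  sudo c_rehash /etc/ssl/certs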
	I1206 10:21:18.903830  347984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:21:18.907255  347984 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 10:21:18.907316  347984 kubeadm.go:401] StartCluster: {Name:flannel-793086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-793086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:21:18.907430  347984 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:21:18.907487  347984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:21:18.934376  347984 cri.go:89] found id: ""
	I1206 10:21:18.934446  347984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:21:18.943063  347984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:21:18.951460  347984 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:21:18.951575  347984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:21:18.959313  347984 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:21:18.959335  347984 kubeadm.go:158] found existing configuration files:
	
	I1206 10:21:18.959479  347984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:21:18.967272  347984 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:21:18.967422  347984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:21:18.976210  347984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:21:18.985236  347984 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:21:18.985338  347984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:21:18.993109  347984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:21:19.002344  347984 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:21:19.002468  347984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:21:19.011847  347984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:21:19.021239  347984 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:21:19.021338  347984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:21:19.029414  347984 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:21:19.071255  347984 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 10:21:19.071404  347984 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:21:19.094849  347984 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:21:19.094933  347984 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:21:19.094974  347984 kubeadm.go:319] OS: Linux
	I1206 10:21:19.095026  347984 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:21:19.095079  347984 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:21:19.095129  347984 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:21:19.095187  347984 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:21:19.095237  347984 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:21:19.095293  347984 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:21:19.095345  347984 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:21:19.095420  347984 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:21:19.095470  347984 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:21:19.169772  347984 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:21:19.169908  347984 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:21:19.170072  347984 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:21:19.179955  347984 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:21:19.186514  347984 out.go:252]   - Generating certificates and keys ...
	I1206 10:21:19.186744  347984 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:21:19.186826  347984 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:21:19.664733  347984 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 10:21:20.341452  347984 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 10:21:21.237378  347984 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 10:21:21.962122  347984 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 10:21:22.208092  347984 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 10:21:22.208248  347984 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-793086 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 10:21:22.700296  347984 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 10:21:22.700428  347984 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-793086 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 10:21:22.848340  347984 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 10:21:23.639622  347984 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 10:21:24.213958  347984 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 10:21:24.214327  347984 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:21:24.898845  347984 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:21:25.141415  347984 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:21:25.259248  347984 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:21:25.535728  347984 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:21:26.277234  347984 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:21:26.278112  347984 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:21:26.281060  347984 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:21:26.284707  347984 out.go:252]   - Booting up control plane ...
	I1206 10:21:26.284828  347984 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:21:26.284912  347984 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:21:26.287047  347984 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:21:26.305203  347984 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:21:26.305572  347984 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:21:26.313181  347984 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:21:26.313574  347984 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:21:26.313825  347984 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:21:26.439996  347984 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:21:26.440113  347984 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:21:27.941287  347984 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501646328s
	I1206 10:21:27.945252  347984 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 10:21:27.945355  347984 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1206 10:21:27.945452  347984 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 10:21:27.945531  347984 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 10:21:30.600535  347984 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.654514884s
	I1206 10:21:32.076043  347984 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.130809325s
	I1206 10:21:33.947091  347984 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001719903s
	I1206 10:21:33.991232  347984 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 10:21:34.015797  347984 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 10:21:34.051430  347984 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 10:21:34.051643  347984 kubeadm.go:319] [mark-control-plane] Marking the node flannel-793086 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 10:21:34.067831  347984 kubeadm.go:319] [bootstrap-token] Using token: 005sxr.gmbtscu6l22rn65a
	I1206 10:21:34.070884  347984 out.go:252]   - Configuring RBAC rules ...
	I1206 10:21:34.071011  347984 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 10:21:34.077781  347984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 10:21:34.098790  347984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 10:21:34.104450  347984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 10:21:34.110888  347984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 10:21:34.115674  347984 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 10:21:34.354771  347984 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 10:21:34.831078  347984 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 10:21:35.355471  347984 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 10:21:35.356743  347984 kubeadm.go:319] 
	I1206 10:21:35.356815  347984 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 10:21:35.356824  347984 kubeadm.go:319] 
	I1206 10:21:35.356898  347984 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 10:21:35.356906  347984 kubeadm.go:319] 
	I1206 10:21:35.356930  347984 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 10:21:35.356989  347984 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 10:21:35.357043  347984 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 10:21:35.357052  347984 kubeadm.go:319] 
	I1206 10:21:35.357104  347984 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 10:21:35.357111  347984 kubeadm.go:319] 
	I1206 10:21:35.357156  347984 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 10:21:35.357163  347984 kubeadm.go:319] 
	I1206 10:21:35.357212  347984 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 10:21:35.357287  347984 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 10:21:35.357360  347984 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 10:21:35.357368  347984 kubeadm.go:319] 
	I1206 10:21:35.357447  347984 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 10:21:35.357523  347984 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 10:21:35.357531  347984 kubeadm.go:319] 
	I1206 10:21:35.357610  347984 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 005sxr.gmbtscu6l22rn65a \
	I1206 10:21:35.357711  347984 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9a3c0c9c90ab0f4223eda0e86927c77df6eeb83b3aa042bddb38493c60751529 \
	I1206 10:21:35.357735  347984 kubeadm.go:319] 	--control-plane 
	I1206 10:21:35.357743  347984 kubeadm.go:319] 
	I1206 10:21:35.357823  347984 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 10:21:35.357830  347984 kubeadm.go:319] 
	I1206 10:21:35.357908  347984 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 005sxr.gmbtscu6l22rn65a \
	I1206 10:21:35.358008  347984 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9a3c0c9c90ab0f4223eda0e86927c77df6eeb83b3aa042bddb38493c60751529 
	I1206 10:21:35.361585  347984 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1206 10:21:35.361810  347984 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:21:35.361920  347984 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
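kubeadm init has now succeeded, and the bootstrap token shown in the join commands above stays valid for the 24h ttl set in the InitConfiguration. Standard kubeadm subcommands can confirm or replace it from the node (the binary path is the one this log uses):

	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm token list
	  # mint a fresh token and print a ready-made join command
	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm token create --print-join-command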
	I1206 10:21:35.361940  347984 cni.go:84] Creating CNI manager for "flannel"
	I1206 10:21:35.366993  347984 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	I1206 10:21:35.369938  347984 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 10:21:35.374134  347984 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 10:21:35.374155  347984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1206 10:21:35.390439  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
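The manifest scp'd to /var/tmp/minikube/cni.yaml is minikube's bundled flannel deployment, applied with the cluster's own kubectl. The usual health checks afterwards are the flannel DaemonSet and the per-node subnet lease (a sketch; the DaemonSet's namespace depends on which upstream flannel manifest minikube bundles):

	  kubectl get ds --all-namespaces | grep -i flannel
	  # flannel records each node's lease here once its daemon is up
	  cat /run/flannel/subnet.env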
	I1206 10:21:35.871258  347984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 10:21:35.871334  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:35.871502  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-793086 minikube.k8s.io/updated_at=2025_12_06T10_21_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=flannel-793086 minikube.k8s.io/primary=true
	I1206 10:21:36.068181  347984 ops.go:34] apiserver oom_adj: -16
	I1206 10:21:36.068303  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:36.568899  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:37.069307  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:37.568601  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:38.069111  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:38.568446  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:39.069418  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:39.569052  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:40.068938  347984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:21:40.193264  347984 kubeadm.go:1114] duration metric: took 4.32200152s to wait for elevateKubeSystemPrivileges
	I1206 10:21:40.193300  347984 kubeadm.go:403] duration metric: took 21.285990022s to StartCluster
	I1206 10:21:40.193318  347984 settings.go:142] acquiring lock: {Name:mk09abb9954ca6c9debd2385eb47481a607889e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:40.193385  347984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 10:21:40.194391  347984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/kubeconfig: {Name:mkd703889f0286bb2e17a38f5b6d18daa2a88ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:21:40.194632  347984 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:21:40.194826  347984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 10:21:40.195118  347984 config.go:182] Loaded profile config "flannel-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:21:40.195164  347984 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:21:40.195235  347984 addons.go:70] Setting storage-provisioner=true in profile "flannel-793086"
	I1206 10:21:40.195251  347984 addons.go:239] Setting addon storage-provisioner=true in "flannel-793086"
	I1206 10:21:40.195279  347984 host.go:66] Checking if "flannel-793086" exists ...
	I1206 10:21:40.195899  347984 cli_runner.go:164] Run: docker container inspect flannel-793086 --format={{.State.Status}}
	I1206 10:21:40.196197  347984 addons.go:70] Setting default-storageclass=true in profile "flannel-793086"
	I1206 10:21:40.196220  347984 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-793086"
	I1206 10:21:40.196565  347984 cli_runner.go:164] Run: docker container inspect flannel-793086 --format={{.State.Status}}
	I1206 10:21:40.198624  347984 out.go:179] * Verifying Kubernetes components...
	I1206 10:21:40.201902  347984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:21:40.244487  347984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:21:40.246379  347984 addons.go:239] Setting addon default-storageclass=true in "flannel-793086"
	I1206 10:21:40.246420  347984 host.go:66] Checking if "flannel-793086" exists ...
	I1206 10:21:40.246924  347984 cli_runner.go:164] Run: docker container inspect flannel-793086 --format={{.State.Status}}
	I1206 10:21:40.249254  347984 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:21:40.249276  347984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:21:40.249335  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:40.295677  347984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa Username:docker}
	I1206 10:21:40.298704  347984 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:21:40.298724  347984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:21:40.298785  347984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-793086
	I1206 10:21:40.328521  347984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/flannel-793086/id_rsa Username:docker}
	I1206 10:21:40.498922  347984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 10:21:40.499068  347984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:21:40.605927  347984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:21:40.671880  347984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:21:41.166069  347984 node_ready.go:35] waiting up to 15m0s for node "flannel-793086" to be "Ready" ...
	I1206 10:21:41.166167  347984 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1206 10:21:41.632514  347984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.026479077s)
	I1206 10:21:41.646905  347984 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 10:21:41.650009  347984 addons.go:530] duration metric: took 1.45483152s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 10:21:41.670263  347984 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-793086" context rescaled to 1 replicas
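The "rescaled to 1 replicas" line records minikube trimming CoreDNS from the stock two replicas down to one, which is all a single-node cluster needs. The equivalent manual operation, shown only to make the log line concrete, is plain kubectl:

	  kubectl -n kube-system scale deployment coredns --replicas=1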
	W1206 10:21:43.169850  347984 node_ready.go:57] node "flannel-793086" has "Ready":"False" status (will retry)
	I1206 10:21:44.674189  347984 node_ready.go:49] node "flannel-793086" is "Ready"
	I1206 10:21:44.674224  347984 node_ready.go:38] duration metric: took 3.508029405s for node "flannel-793086" to be "Ready" ...
	I1206 10:21:44.674261  347984 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:21:44.674381  347984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:21:44.690622  347984 api_server.go:72] duration metric: took 4.495950515s to wait for apiserver process to appear ...
	I1206 10:21:44.690702  347984 api_server.go:88] waiting for apiserver healthz status ...
	I1206 10:21:44.690737  347984 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 10:21:44.702057  347984 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
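The healthz probe above hits the same endpoint you can query by hand; it needs the cluster CA (or -k) because the apiserver serves the certificate generated earlier (paths and address from this log):

	  curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.85.2:8443/healthz
	  # or, tolerating the self-signed chain
	  curl -k https://192.168.85.2:8443/healthz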
	I1206 10:21:44.703162  347984 api_server.go:141] control plane version: v1.34.2
	I1206 10:21:44.703185  347984 api_server.go:131] duration metric: took 12.463084ms to wait for apiserver health ...
	I1206 10:21:44.703195  347984 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 10:21:44.705992  347984 system_pods.go:59] 7 kube-system pods found
	I1206 10:21:44.706026  347984 system_pods.go:61] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:44.706032  347984 system_pods.go:61] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:44.706040  347984 system_pods.go:61] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:21:44.706046  347984 system_pods.go:61] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 10:21:44.706050  347984 system_pods.go:61] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:44.706053  347984 system_pods.go:61] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:44.706058  347984 system_pods.go:61] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:21:44.706064  347984 system_pods.go:74] duration metric: took 2.863754ms to wait for pod list to return data ...
	I1206 10:21:44.706071  347984 default_sa.go:34] waiting for default service account to be created ...
	I1206 10:21:44.708776  347984 default_sa.go:45] found service account: "default"
	I1206 10:21:44.708794  347984 default_sa.go:55] duration metric: took 2.717406ms for default service account to be created ...
	I1206 10:21:44.708803  347984 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 10:21:44.711327  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:44.711449  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:44.711474  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:44.711518  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:21:44.711539  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 10:21:44.711561  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:44.711598  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:44.711617  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:21:44.711696  347984 retry.go:31] will retry after 286.07909ms: missing components: kube-dns
	I1206 10:21:45.003099  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:45.003200  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:45.003224  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:45.003265  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:21:45.003299  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 10:21:45.003319  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:45.003357  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:45.003446  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:21:45.003479  347984 retry.go:31] will retry after 381.600207ms: missing components: kube-dns
	I1206 10:21:45.390570  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:45.390656  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:45.390688  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:45.390726  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:21:45.390763  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 10:21:45.390782  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:45.390819  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:45.390843  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:21:45.390872  347984 retry.go:31] will retry after 347.225205ms: missing components: kube-dns
	I1206 10:21:45.741912  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:45.741947  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:45.741953  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:45.741981  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:21:45.742013  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 10:21:45.742025  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:45.742031  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:45.742037  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:21:45.742080  347984 retry.go:31] will retry after 483.842121ms: missing components: kube-dns
	I1206 10:21:46.230727  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:46.230759  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:46.230766  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:46.230806  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:21:46.230835  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 10:21:46.230851  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:46.230862  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:46.230867  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:21:46.230883  347984 retry.go:31] will retry after 682.177575ms: missing components: kube-dns
	I1206 10:21:46.917022  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:46.917056  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:46.917063  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:46.917072  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:21:46.917077  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:21:46.917081  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:46.917085  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:46.917089  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:21:46.917103  347984 retry.go:31] will retry after 839.949407ms: missing components: kube-dns
	I1206 10:21:47.762013  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:47.762051  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:47.762064  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:47.762072  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running
	I1206 10:21:47.762076  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:21:47.762081  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:47.762085  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:47.762097  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:21:47.762118  347984 retry.go:31] will retry after 803.316372ms: missing components: kube-dns
	I1206 10:21:48.568970  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:48.569007  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:48.569014  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:48.569021  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running
	I1206 10:21:48.569025  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:21:48.569030  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:48.569034  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:48.569038  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:21:48.569052  347984 retry.go:31] will retry after 1.311566576s: missing components: kube-dns
	I1206 10:21:49.885005  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:49.885048  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:49.885056  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:49.885063  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running
	I1206 10:21:49.885068  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:21:49.885074  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:49.885078  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:49.885082  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:21:49.885097  347984 retry.go:31] will retry after 1.150353205s: missing components: kube-dns
	I1206 10:21:51.038807  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:51.038843  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:51.038850  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:51.038857  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running
	I1206 10:21:51.038861  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:21:51.038867  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:51.038871  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:51.038881  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:21:51.038903  347984 retry.go:31] will retry after 1.650064789s: missing components: kube-dns
	I1206 10:21:52.694000  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:52.694035  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:52.694042  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:52.694048  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running
	I1206 10:21:52.694054  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:21:52.694058  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:52.694062  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:52.694066  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:21:52.694081  347984 retry.go:31] will retry after 1.794858109s: missing components: kube-dns
	I1206 10:21:54.493675  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:54.493712  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:54.493721  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:54.493727  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running
	I1206 10:21:54.493731  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:21:54.493735  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:54.493739  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:54.493743  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:21:54.493757  347984 retry.go:31] will retry after 3.158719219s: missing components: kube-dns
	I1206 10:21:57.656906  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:21:57.656935  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:21:57.656942  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:21:57.656950  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running
	I1206 10:21:57.656955  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:21:57.656960  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:21:57.656964  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:21:57.656968  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:21:57.656981  347984 retry.go:31] will retry after 3.190340655s: missing components: kube-dns
	I1206 10:22:00.851261  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:22:00.851294  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:22:00.851303  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:22:00.851310  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running
	I1206 10:22:00.851314  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:22:00.851319  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:22:00.851322  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:22:00.851327  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:22:00.851341  347984 retry.go:31] will retry after 5.059619026s: missing components: kube-dns
	I1206 10:22:05.915188  347984 system_pods.go:86] 7 kube-system pods found
	I1206 10:22:05.915224  347984 system_pods.go:89] "coredns-66bc5c9577-7gqsf" [a6a2e50b-9a3a-428a-a120-75ed0f1dcc81] Running
	I1206 10:22:05.915231  347984 system_pods.go:89] "etcd-flannel-793086" [99e801c3-d921-49ea-9ce5-93c645c0a244] Running
	I1206 10:22:05.915235  347984 system_pods.go:89] "kube-apiserver-flannel-793086" [1a3bafaf-9c8c-4348-bf9c-5e3fb71fd8b1] Running
	I1206 10:22:05.915240  347984 system_pods.go:89] "kube-controller-manager-flannel-793086" [5d227c8d-17b4-4467-9de8-efd68287b8ce] Running
	I1206 10:22:05.915244  347984 system_pods.go:89] "kube-proxy-rhz68" [495233bc-6f44-4024-ab04-b567c4f35e07] Running
	I1206 10:22:05.915248  347984 system_pods.go:89] "kube-scheduler-flannel-793086" [64369d0c-79a6-4d15-af0c-83881c0357d0] Running
	I1206 10:22:05.915252  347984 system_pods.go:89] "storage-provisioner" [2a78f40e-4c35-4b8a-aeaa-81c729d0fdbf] Running
	I1206 10:22:05.915262  347984 system_pods.go:126] duration metric: took 21.206452519s to wait for k8s-apps to be running ...
	I1206 10:22:05.915270  347984 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 10:22:05.915327  347984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:22:05.929606  347984 system_svc.go:56] duration metric: took 14.328093ms WaitForService to wait for kubelet
	I1206 10:22:05.929633  347984 kubeadm.go:587] duration metric: took 25.734968851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:22:05.929651  347984 node_conditions.go:102] verifying NodePressure condition ...
	I1206 10:22:05.932704  347984 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 10:22:05.932747  347984 node_conditions.go:123] node cpu capacity is 2
	I1206 10:22:05.932763  347984 node_conditions.go:105] duration metric: took 3.106513ms to run NodePressure ...
	I1206 10:22:05.932775  347984 start.go:242] waiting for startup goroutines ...
	I1206 10:22:05.932783  347984 start.go:247] waiting for cluster config update ...
	I1206 10:22:05.932794  347984 start.go:256] writing updated cluster config ...
	I1206 10:22:05.933093  347984 ssh_runner.go:195] Run: rm -f paused
	I1206 10:22:05.936989  347984 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:22:05.941082  347984 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7gqsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:05.952816  347984 pod_ready.go:94] pod "coredns-66bc5c9577-7gqsf" is "Ready"
	I1206 10:22:05.952844  347984 pod_ready.go:86] duration metric: took 11.734569ms for pod "coredns-66bc5c9577-7gqsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:05.956494  347984 pod_ready.go:83] waiting for pod "etcd-flannel-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:05.961645  347984 pod_ready.go:94] pod "etcd-flannel-793086" is "Ready"
	I1206 10:22:05.961679  347984 pod_ready.go:86] duration metric: took 5.155458ms for pod "etcd-flannel-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:05.964239  347984 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:05.969203  347984 pod_ready.go:94] pod "kube-apiserver-flannel-793086" is "Ready"
	I1206 10:22:05.969270  347984 pod_ready.go:86] duration metric: took 5.007224ms for pod "kube-apiserver-flannel-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:05.971565  347984 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:06.340901  347984 pod_ready.go:94] pod "kube-controller-manager-flannel-793086" is "Ready"
	I1206 10:22:06.340934  347984 pod_ready.go:86] duration metric: took 369.303059ms for pod "kube-controller-manager-flannel-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:06.541395  347984 pod_ready.go:83] waiting for pod "kube-proxy-rhz68" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:06.941864  347984 pod_ready.go:94] pod "kube-proxy-rhz68" is "Ready"
	I1206 10:22:06.941890  347984 pod_ready.go:86] duration metric: took 400.46819ms for pod "kube-proxy-rhz68" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:07.141141  347984 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:07.542076  347984 pod_ready.go:94] pod "kube-scheduler-flannel-793086" is "Ready"
	I1206 10:22:07.542104  347984 pod_ready.go:86] duration metric: took 400.891439ms for pod "kube-scheduler-flannel-793086" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:22:07.542117  347984 pod_ready.go:40] duration metric: took 1.605091809s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:22:07.613573  347984 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 10:22:07.616848  347984 out.go:179] * Done! kubectl is now configured to use "flannel-793086" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580837118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580855563Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580885274Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580900995Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580911087Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580921853Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580931149Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580945311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580962436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.580998678Z" level=info msg="Connect containerd service"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.581274307Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.581881961Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598029541Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598099063Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598123851Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.598177546Z" level=info msg="Start recovering state"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621091913Z" level=info msg="Start event monitor"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621277351Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621341351Z" level=info msg="Start streaming server"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621405639Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621464397Z" level=info msg="runtime interface starting up..."
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621515523Z" level=info msg="starting plugins..."
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.621595705Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:02:56 no-preload-257359 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:02:56 no-preload-257359 containerd[556]: time="2025-12-06T10:02:56.623695007Z" level=info msg="containerd successfully booted in 0.067121s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:22:26.751041   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:22:26.751790   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:22:26.753458   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:22:26.753878   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 10:22:26.755368   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	
	
	==> kernel <==
	 10:22:26 up  2:04,  0 user,  load average: 2.66, 2.26, 1.75
	Linux no-preload-257359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:22:23 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:22:24 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1554.
	Dec 06 10:22:24 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:22:24 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:22:24 no-preload-257359 kubelet[10138]: E1206 10:22:24.537189   10138 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:22:24 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:22:24 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:22:25 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1555.
	Dec 06 10:22:25 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:22:25 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:22:25 no-preload-257359 kubelet[10144]: E1206 10:22:25.264459   10144 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:22:25 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:22:25 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:22:25 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1556.
	Dec 06 10:22:25 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:22:25 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:22:26 no-preload-257359 kubelet[10171]: E1206 10:22:26.045880   10171 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:22:26 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:22:26 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:22:26 no-preload-257359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1557.
	Dec 06 10:22:26 no-preload-257359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:22:26 no-preload-257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:22:26 no-preload-257359 kubelet[10275]: E1206 10:22:26.770233   10275 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:22:26 no-preload-257359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:22:26 no-preload-257359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
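
The long run of retry.go lines near the top of the log above is minikube polling the kube-system namespace until every expected component (here kube-dns) reports Running, sleeping a growing, jittered interval between attempts. A minimal, dependency-free sketch of the same wait-with-backoff pattern — helper names are illustrative, not minikube's actual internals:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check until it returns nil, sleeping a jittered,
	// roughly doubling backoff between attempts, up to timeout.
	func waitFor(check func() error, initial, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		backoff := initial
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting: %w", err)
			}
			// Jitter the sleep so concurrent waiters don't poll in lockstep;
			// this is what produces irregular "will retry after ..." intervals.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2+1))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			if backoff < 5*time.Second {
				backoff *= 2
			}
		}
	}

	func main() {
		ready := time.Now().Add(3 * time.Second) // stand-in for kube-dns becoming Ready
		err := waitFor(func() error {
			if time.Now().Before(ready) {
				return errors.New("missing components: kube-dns")
			}
			return nil
		}, 300*time.Millisecond, 30*time.Second)
		fmt.Println("done:", err)
	}
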
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-257359 -n no-preload-257359: exit status 2 (346.711885ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-257359" apiserver is not running, skipping kubectl commands (state="Stopped")
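The --format flag in the status command above is a Go text/template rendered against minikube's status struct, which is why {{.APIServer}} prints just the single field ("Stopped" here). A minimal sketch using the standard text/template package; the Status type below is a stand-in, not minikube's actual struct:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for minikube's status struct; the field names
	// mirror the template keys used above but are illustrative only.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Stopped"
			panic(err)
		}
	}
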
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (263.36s)
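The root cause for this failure group is visible in the kubelet section of the log above: every kubelet restart (counter 1554 through 1557) fails configuration validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes back up. Whether a host is on the unified cgroup v2 hierarchy can be checked from the filesystem magic of /sys/fs/cgroup; a small Linux-only Go sketch (the constant is CGROUP2_SUPER_MAGIC from <linux/magic.h>):

	package main

	import (
		"fmt"
		"syscall"
	)

	// cgroup2SuperMagic is CGROUP2_SUPER_MAGIC from <linux/magic.h>.
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs:", err)
			return
		}
		if st.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1; a kubelet configured to fail on v1 will crash-loop as above")
		}
	}

The shell equivalent is `stat -fc %T /sys/fs/cgroup/`, which prints cgroup2fs on a v2 host.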
E1206 10:22:57.331082    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:23:00.155569    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:23:20.637079    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.66
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 3.81
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.4
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.65
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 173.69
38 TestAddons/serial/Volcano 41.81
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 9.86
44 TestAddons/parallel/Registry 16.12
45 TestAddons/parallel/RegistryCreds 0.76
46 TestAddons/parallel/Ingress 20.9
47 TestAddons/parallel/InspektorGadget 11.78
48 TestAddons/parallel/MetricsServer 7.1
50 TestAddons/parallel/CSI 66.18
51 TestAddons/parallel/Headlamp 17.07
52 TestAddons/parallel/CloudSpanner 5.97
53 TestAddons/parallel/LocalPath 9.86
54 TestAddons/parallel/NvidiaDevicePlugin 6.85
55 TestAddons/parallel/Yakd 10.86
57 TestAddons/StoppedEnableDisable 12.35
58 TestCertOptions 40.85
59 TestCertExpiration 222.8
61 TestForceSystemdFlag 37.31
62 TestForceSystemdEnv 36.87
63 TestDockerEnvContainerd 48.53
67 TestErrorSpam/setup 33.52
68 TestErrorSpam/start 0.85
69 TestErrorSpam/status 1.1
70 TestErrorSpam/pause 1.72
71 TestErrorSpam/unpause 1.9
72 TestErrorSpam/stop 1.65
75 TestFunctional/serial/CopySyncFile 0.01
76 TestFunctional/serial/StartWithProxy 48.42
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.71
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.12
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.66
84 TestFunctional/serial/CacheCmd/cache/add_local 1.35
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 43.17
93 TestFunctional/serial/ComponentHealth 0.12
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.5
96 TestFunctional/serial/InvalidService 4.33
98 TestFunctional/parallel/ConfigCmd 0.49
99 TestFunctional/parallel/DashboardCmd 7.66
100 TestFunctional/parallel/DryRun 0.45
101 TestFunctional/parallel/InternationalLanguage 0.28
102 TestFunctional/parallel/StatusCmd 1.1
106 TestFunctional/parallel/ServiceCmdConnect 8.69
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 24.58
110 TestFunctional/parallel/SSHCmd 0.89
111 TestFunctional/parallel/CpCmd 1.86
113 TestFunctional/parallel/FileSync 0.29
114 TestFunctional/parallel/CertSync 1.74
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.84
122 TestFunctional/parallel/License 0.33
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
125 TestFunctional/parallel/Version/short 0.06
126 TestFunctional/parallel/Version/components 1.47
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.45
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
134 TestFunctional/parallel/ImageCommands/ImageBuild 4.78
135 TestFunctional/parallel/ImageCommands/Setup 0.63
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.41
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.32
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/parallel/MountCmd/any-port 8.47
153 TestFunctional/parallel/MountCmd/specific-port 2.39
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.02
155 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
157 TestFunctional/parallel/ProfileCmd/profile_list 0.44
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
159 TestFunctional/parallel/ServiceCmd/List 0.69
160 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.64
162 TestFunctional/parallel/ServiceCmd/Format 0.6
163 TestFunctional/parallel/ServiceCmd/URL 0.44
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.42
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.05
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.87
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.99
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.99
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.44
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.73
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.45
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.71
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.58
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.24
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.38
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.87
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.15
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.5
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.65
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.29
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.1
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.34
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.49
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.68
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.41
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.17
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 153.14
265 TestMultiControlPlane/serial/DeployApp 7.92
266 TestMultiControlPlane/serial/PingHostFromPods 1.78
267 TestMultiControlPlane/serial/AddWorkerNode 59.47
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.15
270 TestMultiControlPlane/serial/CopyFile 21.1
271 TestMultiControlPlane/serial/StopSecondaryNode 13.02
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
273 TestMultiControlPlane/serial/RestartSecondaryNode 14.04
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.12
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.44
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.19
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
278 TestMultiControlPlane/serial/StopCluster 36.43
279 TestMultiControlPlane/serial/RestartCluster 63.24
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
281 TestMultiControlPlane/serial/AddSecondaryNode 64.1
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.15
287 TestJSONOutput/start/Command 47.39
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.75
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.65
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 6.05
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.28
312 TestKicCustomNetwork/create_custom_network 39.2
313 TestKicCustomNetwork/use_default_bridge_network 35.24
314 TestKicExistingNetwork 35.66
315 TestKicCustomSubnet 38.44
316 TestKicStaticIP 36.07
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 70.32
321 TestMountStart/serial/StartWithMountFirst 8.78
322 TestMountStart/serial/VerifyMountFirst 0.29
323 TestMountStart/serial/StartWithMountSecond 8.28
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.29
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 8.05
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 108.08
333 TestMultiNode/serial/DeployApp2Nodes 5.9
334 TestMultiNode/serial/PingHostFrom2Pods 1.03
335 TestMultiNode/serial/AddNode 28.25
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.82
338 TestMultiNode/serial/CopyFile 10.71
339 TestMultiNode/serial/StopNode 2.42
340 TestMultiNode/serial/StartAfterStop 7.67
341 TestMultiNode/serial/RestartKeepsNodes 73
342 TestMultiNode/serial/DeleteNode 5.7
343 TestMultiNode/serial/StopMultiNode 24.18
344 TestMultiNode/serial/RestartMultiNode 52.02
345 TestMultiNode/serial/ValidateNameConflict 36.35
350 TestPreload 116.45
352 TestScheduledStopUnix 109.22
355 TestInsufficientStorage 12.46
356 TestRunningBinaryUpgrade 310.5
359 TestMissingContainerUpgrade 128.68
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 47.03
363 TestNoKubernetes/serial/StartWithStopK8s 25.07
364 TestNoKubernetes/serial/Start 7.46
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
367 TestNoKubernetes/serial/ProfileList 0.72
368 TestNoKubernetes/serial/Stop 1.29
369 TestNoKubernetes/serial/StartNoArgs 6.78
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
371 TestStoppedBinaryUpgrade/Setup 1.14
372 TestStoppedBinaryUpgrade/Upgrade 55.06
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.01
382 TestPause/serial/Start 50.95
383 TestPause/serial/SecondStartNoReconfiguration 6.22
384 TestPause/serial/Pause 0.74
385 TestPause/serial/VerifyStatus 0.37
386 TestPause/serial/Unpause 0.62
387 TestPause/serial/PauseAgain 0.94
388 TestPause/serial/DeletePaused 2.94
389 TestPause/serial/VerifyDeletedResources 0.4
397 TestNetworkPlugins/group/false 3.75
402 TestStartStop/group/old-k8s-version/serial/FirstStart 64.47
404 TestStartStop/group/embed-certs/serial/FirstStart 86.72
405 TestStartStop/group/old-k8s-version/serial/DeployApp 9.5
406 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
407 TestStartStop/group/old-k8s-version/serial/Stop 12.2
408 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
409 TestStartStop/group/old-k8s-version/serial/SecondStart 27.01
410 TestStartStop/group/embed-certs/serial/DeployApp 10.42
411 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
412 TestStartStop/group/embed-certs/serial/Stop 12.54
413 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 11
414 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
415 TestStartStop/group/embed-certs/serial/SecondStart 55.67
416 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
417 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
418 TestStartStop/group/old-k8s-version/serial/Pause 4.97
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
424 TestStartStop/group/embed-certs/serial/Pause 3.16
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.93
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.14
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.3
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.2
440 TestStartStop/group/no-preload/serial/Stop 1.32
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.36
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
453 TestNetworkPlugins/group/auto/Start 79.37
454 TestNetworkPlugins/group/auto/KubeletFlags 0.31
455 TestNetworkPlugins/group/auto/NetCatPod 10.3
456 TestNetworkPlugins/group/auto/DNS 0.19
457 TestNetworkPlugins/group/auto/Localhost 0.15
458 TestNetworkPlugins/group/auto/HairPin 0.21
459 TestNetworkPlugins/group/kindnet/Start 79.33
460 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
462 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
463 TestNetworkPlugins/group/kindnet/DNS 0.18
464 TestNetworkPlugins/group/kindnet/Localhost 0.16
465 TestNetworkPlugins/group/kindnet/HairPin 0.18
466 TestNetworkPlugins/group/calico/Start 60.54
467 TestNetworkPlugins/group/calico/ControllerPod 6.01
468 TestNetworkPlugins/group/calico/KubeletFlags 0.31
469 TestNetworkPlugins/group/calico/NetCatPod 9.27
470 TestNetworkPlugins/group/calico/DNS 0.22
471 TestNetworkPlugins/group/calico/Localhost 0.17
472 TestNetworkPlugins/group/calico/HairPin 0.15
474 TestNetworkPlugins/group/custom-flannel/Start 58.45
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
477 TestNetworkPlugins/group/custom-flannel/DNS 0.18
478 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
479 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
480 TestNetworkPlugins/group/enable-default-cni/Start 42.74
481 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
482 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
483 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
484 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
485 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
486 TestNetworkPlugins/group/flannel/Start 61.9
487 TestNetworkPlugins/group/flannel/ControllerPod 6.01
488 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
489 TestNetworkPlugins/group/flannel/NetCatPod 10.29
490 TestNetworkPlugins/group/flannel/DNS 0.18
491 TestNetworkPlugins/group/flannel/Localhost 0.15
492 TestNetworkPlugins/group/flannel/HairPin 0.15
493 TestNetworkPlugins/group/bridge/Start 72.5
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
495 TestNetworkPlugins/group/bridge/NetCatPod 9.28
496 TestNetworkPlugins/group/bridge/DNS 0.17
497 TestNetworkPlugins/group/bridge/Localhost 0.14
498 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (5.66s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-892191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-892191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.663551748s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.66s)
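
With -o=json, minikube start streams one JSON event per stdout line (step, download, and error records), which is what the json-events tests assert against. The events follow a CloudEvents-style envelope, so "type" and "data" are the interesting keys; a minimal line-by-line consumer, decoding into a generic map rather than assuming an exact schema:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// e.g. piped from: minikube start -o=json --download-only ...
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some events are long lines
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise on the stream
			}
			fmt.Printf("type=%v data=%v\n", ev["type"], ev["data"])
		}
	}
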

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 08:28:32.232073    4292 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1206 08:28:32.232158    4292 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
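
The preload-exists check reduces to a stat of the cached tarball under the minikube home directory; the path printed in the log encodes the preload schema (v18), Kubernetes version, container runtime, storage driver, and architecture. A sketch of that check (helper name hypothetical; filename pattern taken from the log above):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists reports whether the preloaded-images tarball for a given
	// Kubernetes version and container runtime is already in the local cache.
	func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4", k8sVersion, runtime)
		_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0", "containerd"))
	}
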

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-892191
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-892191: exit status 85 (60.080315ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-892191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-892191 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:28:26.611762    4298 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:26.612000    4298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:26.612034    4298 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:26.612058    4298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:26.612334    4298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	W1206 08:28:26.612491    4298 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22049-2448/.minikube/config/config.json: open /home/jenkins/minikube-integration/22049-2448/.minikube/config/config.json: no such file or directory
	I1206 08:28:26.612917    4298 out.go:368] Setting JSON to true
	I1206 08:28:26.613703    4298 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":658,"bootTime":1765009049,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:28:26.613797    4298 start.go:143] virtualization:  
	I1206 08:28:26.619552    4298 out.go:99] [download-only-892191] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1206 08:28:26.619738    4298 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 08:28:26.619788    4298 notify.go:221] Checking for updates...
	I1206 08:28:26.622911    4298 out.go:171] MINIKUBE_LOCATION=22049
	I1206 08:28:26.626144    4298 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:26.629140    4298 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:28:26.632066    4298 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:28:26.635257    4298 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 08:28:26.641143    4298 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 08:28:26.641399    4298 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:26.668272    4298 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:28:26.668380    4298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:27.082275    4298 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-06 08:28:27.072549779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:28:27.082375    4298 docker.go:319] overlay module found
	I1206 08:28:27.085291    4298 out.go:99] Using the docker driver based on user configuration
	I1206 08:28:27.085331    4298 start.go:309] selected driver: docker
	I1206 08:28:27.085340    4298 start.go:927] validating driver "docker" against <nil>
	I1206 08:28:27.085434    4298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:27.154186    4298 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-06 08:28:27.145021221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:28:27.154335    4298 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:27.154632    4298 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 08:28:27.154826    4298 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 08:28:27.158059    4298 out.go:171] Using Docker driver with root privileges
	I1206 08:28:27.160912    4298 cni.go:84] Creating CNI manager for ""
	I1206 08:28:27.160985    4298 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 08:28:27.161003    4298 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 08:28:27.161087    4298 start.go:353] cluster config:
	{Name:download-only-892191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-892191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:28:27.164075    4298 out.go:99] Starting "download-only-892191" primary control-plane node in "download-only-892191" cluster
	I1206 08:28:27.164094    4298 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 08:28:27.166964    4298 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1206 08:28:27.167015    4298 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1206 08:28:27.167170    4298 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 08:28:27.183067    4298 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 08:28:27.183273    4298 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 08:28:27.183402    4298 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 08:28:27.216153    4298 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:28:27.216184    4298 cache.go:65] Caching tarball of preloaded images
	I1206 08:28:27.216351    4298 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1206 08:28:27.219728    4298 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1206 08:28:27.219757    4298 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1206 08:28:27.299845    4298 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1206 08:28:27.300007    4298 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1206 08:28:30.605945    4298 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1206 08:28:30.606314    4298 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/download-only-892191/config.json ...
	I1206 08:28:30.606348    4298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/download-only-892191/config.json: {Name:mk263758c1c6cfd2c764c1a331fac8d18d1efc05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:28:30.606501    4298 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1206 08:28:30.606665    4298 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22049-2448/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-892191 host does not exist
	  To start a cluster, run: "minikube start -p download-only-892191"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
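
The log above traces the whole download-only flow: minikube resolves the preload tarball for the requested Kubernetes version, asks the GCS API for the expected MD5, downloads the tarball into the profile cache, and then fetches the matching kubectl binary. A minimal sketch of re-running that step and checking the cached preload by hand, using the path and checksum reported in the log (the checksum value differs per release):

    # same invocation as the test above
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-892191 \
      --force --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=containerd --driver=docker

    # compare against the checksum minikube fetched from the GCS API:
    # 38d7f581f2fa4226c8af2c9106b982b7
    md5sum /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4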

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-892191
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.2/json-events (3.81s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-031077 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-031077 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.80885147s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.81s)
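
With -o=json, minikube emits machine-readable progress events on stdout instead of human-oriented text, which is what the json-events subtests consume. A hedged sketch of tailing that stream with jq; the assumption (not asserted by this log) is that each stdout line is one CloudEvents-style JSON object carrying a "type" field, per minikube's JSON output format:

    # print only the event type of each JSON line ("type" field is an assumption)
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-031077 \
      --force --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker \
      | jq -r '.type'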

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 08:28:36.452193    4292 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1206 08:28:36.452228    4292 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-031077
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-031077: exit status 85 (88.220502ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-892191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-892191 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-892191                                                                                                                                                               │ download-only-892191 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-031077 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-031077 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:28:32.683132    4496 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:32.683303    4496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:32.683333    4496 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:32.683358    4496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:32.683668    4496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:28:32.684087    4496 out.go:368] Setting JSON to true
	I1206 08:28:32.684846    4496 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":664,"bootTime":1765009049,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:28:32.684945    4496 start.go:143] virtualization:  
	I1206 08:28:32.686803    4496 out.go:99] [download-only-031077] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:28:32.687012    4496 notify.go:221] Checking for updates...
	I1206 08:28:32.688260    4496 out.go:171] MINIKUBE_LOCATION=22049
	I1206 08:28:32.689733    4496 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:32.691298    4496 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:28:32.692715    4496 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:28:32.693996    4496 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 08:28:32.696687    4496 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 08:28:32.696942    4496 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:32.717538    4496 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:28:32.717643    4496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:32.786297    4496 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-06 08:28:32.776864169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:28:32.786406    4496 docker.go:319] overlay module found
	I1206 08:28:32.787773    4496 out.go:99] Using the docker driver based on user configuration
	I1206 08:28:32.787806    4496 start.go:309] selected driver: docker
	I1206 08:28:32.787812    4496 start.go:927] validating driver "docker" against <nil>
	I1206 08:28:32.787920    4496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:32.840069    4496 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-06 08:28:32.830798791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:28:32.840227    4496 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:32.840535    4496 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 08:28:32.840682    4496 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 08:28:32.841981    4496 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-031077 host does not exist
	  To start a cluster, run: "minikube start -p download-only-031077"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-031077
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.4s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-944992 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-944992 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.39554729s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.40s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 08:28:40.288837    4292 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1206 08:28:40.288871    4292 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-944992
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-944992: exit status 85 (88.948058ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-892191 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-892191 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-892191                                                                                                                                                                      │ download-only-892191 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-031077 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-031077 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-031077                                                                                                                                                                      │ download-only-031077 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-944992 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-944992 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:28:36.934931    4691 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:36.935099    4691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:36.935112    4691 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:36.935117    4691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:36.935362    4691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:28:36.935771    4691 out.go:368] Setting JSON to true
	I1206 08:28:36.936498    4691 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":668,"bootTime":1765009049,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:28:36.936565    4691 start.go:143] virtualization:  
	I1206 08:28:36.939958    4691 out.go:99] [download-only-944992] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:28:36.940189    4691 notify.go:221] Checking for updates...
	I1206 08:28:36.943219    4691 out.go:171] MINIKUBE_LOCATION=22049
	I1206 08:28:36.946182    4691 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:36.949036    4691 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:28:36.951881    4691 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:28:36.954867    4691 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 08:28:36.960588    4691 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 08:28:36.960840    4691 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:36.990546    4691 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:28:36.990649    4691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:37.051337    4691 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 08:28:37.041346971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:28:37.051580    4691 docker.go:319] overlay module found
	I1206 08:28:37.054575    4691 out.go:99] Using the docker driver based on user configuration
	I1206 08:28:37.054615    4691 start.go:309] selected driver: docker
	I1206 08:28:37.054623    4691 start.go:927] validating driver "docker" against <nil>
	I1206 08:28:37.054736    4691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:28:37.112345    4691 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 08:28:37.103409074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:28:37.112503    4691 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:37.112793    4691 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 08:28:37.112949    4691 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 08:28:37.115950    4691 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-944992 host does not exist
	  To start a cluster, run: "minikube start -p download-only-944992"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-944992
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I1206 08:28:41.599766    4292 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-275726 --alsologtostderr --binary-mirror http://127.0.0.1:32785 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-275726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-275726
--- PASS: TestBinaryMirror (0.65s)
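
TestBinaryMirror points --binary-mirror at a local HTTP endpoint so kubectl is downloaded from it rather than dl.k8s.io. A rough sketch of the same setup with a hypothetical stand-in mirror; the test brings up its own server on 127.0.0.1:32785, and the http.server plus ./mirror directory below are illustrative assumptions, not what the test actually runs:

    # hypothetical mirror: must expose the same path layout as dl.k8s.io
    python3 -m http.server 32785 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-275726 \
      --binary-mirror http://127.0.0.1:32785 --driver=docker --container-runtime=containerd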

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-962295
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-962295: exit status 85 (82.86908ms)

-- stdout --
	* Profile "addons-962295" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-962295"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
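
The PreSetup checks only pin down minikube's error handling: addon operations against a profile that does not exist must fail with exit status 85 and point the user at "minikube profile list". The same assertion by hand, reusing the exit code recorded above:

    out/minikube-linux-arm64 addons enable dashboard -p addons-962295
    # 85 is the "profile not found" exit status observed in this run
    [ $? -eq 85 ] && echo "missing profile correctly rejected"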

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-962295
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-962295: exit status 85 (84.076003ms)

-- stdout --
	* Profile "addons-962295" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-962295"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (173.69s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-962295 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-962295 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m53.685627338s)
--- PASS: TestAddons/Setup (173.69s)
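
Setup brings up a single 4096MB profile with the full addon roster enabled at start time via repeated --addons flags. Addons can equally be toggled on the running profile afterwards; a minimal sketch against the same profile (metrics-server is one of the addons enabled above):

    out/minikube-linux-arm64 -p addons-962295 addons enable metrics-server
    out/minikube-linux-arm64 -p addons-962295 addons list   # show per-addon status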

TestAddons/serial/Volcano (41.81s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 66.610453ms
addons_test.go:884: volcano-controller stabilized in 66.686752ms
addons_test.go:876: volcano-admission stabilized in 66.780544ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-wlj4p" [518889cc-d05a-440a-a854-1f7f541e053b] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004967532s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-f9dlg" [b2e7648e-9754-4cc8-b187-4cec0de54c76] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003665358s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-vls86" [22f579ba-4dfe-43d3-b118-cbb8d49adfb4] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004224258s
addons_test.go:903: (dbg) Run:  kubectl --context addons-962295 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-962295 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-962295 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [adf72149-ae53-43d6-b77d-3f9a1f2506d3] Pending
helpers_test.go:352: "test-job-nginx-0" [adf72149-ae53-43d6-b77d-3f9a1f2506d3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [adf72149-ae53-43d6-b77d-3f9a1f2506d3] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.004277287s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-962295 addons disable volcano --alsologtostderr -v=1: (11.96726555s)
--- PASS: TestAddons/serial/Volcano (41.81s)
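
The Volcano check waits for the scheduler, admission, and controller pods to reach Running, then submits testdata/vcjob.yaml and waits for the job's pod. The test's polling loop can be approximated with kubectl wait; a sketch assuming the same namespaces and labels the test itself uses:

    kubectl --context addons-962295 -n volcano-system wait --for=condition=Ready \
      pod -l app=volcano-scheduler --timeout=360s
    kubectl --context addons-962295 create -f testdata/vcjob.yaml
    kubectl --context addons-962295 -n my-volcano wait --for=condition=Ready \
      pod -l volcano.sh/job-name=test-job --timeout=180s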

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-962295 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-962295 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-962295 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-962295 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [502f4c93-108b-40ac-80e3-b8bf72027011] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [502f4c93-108b-40ac-80e3-b8bf72027011] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003705647s
addons_test.go:694: (dbg) Run:  kubectl --context addons-962295 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-962295 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-962295 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-962295 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.86s)
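
gcp-auth mutates newly created pods so they carry fake credentials: a mounted /google-app-creds.json plus the matching environment variables, which is exactly what the exec probes above confirm. The same verification by hand, with the commands lifted from the test:

    kubectl --context addons-962295 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-962295 exec busybox -- cat /google-app-creds.json
    kubectl --context addons-962295 exec busybox -- printenv GOOGLE_CLOUD_PROJECT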

TestAddons/parallel/Registry (16.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.00625ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-d6lzx" [af87c9a1-dddf-4bff-84ef-7eca940fff6d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007912501s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2gfsj" [8a32a26e-7345-4f4e-9187-7d69cb5833fa] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004053107s
addons_test.go:392: (dbg) Run:  kubectl --context addons-962295 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-962295 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-962295 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.083570694s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 ip
2025/12/06 08:32:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.12s)
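
The registry addon is probed on both paths: in-cluster resolution of the registry Service DNS name, and host access through registry-proxy on the node IP (the DEBUG GET above). A sketch of both probes; the /v2/ suffix in the second command assumes the standard Docker registry HTTP API answers on port 5000:

    # in-cluster: the service DNS name must answer
    kubectl --context addons-962295 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # from the host, via registry-proxy on the node IP
    curl -s "http://$(out/minikube-linux-arm64 -p addons-962295 ip):5000/v2/"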

TestAddons/parallel/RegistryCreds (0.76s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.568107ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-962295
addons_test.go:332: (dbg) Run:  kubectl --context addons-962295 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)

TestAddons/parallel/Ingress (20.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-962295 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-962295 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-962295 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a921a066-1db3-4f91-9877-dc63441f14a5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a921a066-1db3-4f91-9877-dc63441f14a5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003889531s
I1206 08:33:46.363142    4292 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-962295 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-962295 addons disable ingress-dns --alsologtostderr -v=1: (1.957200404s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-962295 addons disable ingress --alsologtostderr -v=1: (7.946106799s)
--- PASS: TestAddons/parallel/Ingress (20.90s)
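
Ingress is validated from inside the node (curl against the controller on 127.0.0.1 with the ingress Host header) and ingress-dns from outside (nslookup using the node IP as the DNS server). The two probes as the test runs them:

    out/minikube-linux-arm64 -p addons-962295 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-962295 ip)"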

TestAddons/parallel/InspektorGadget (11.78s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-5w8jp" [5550e5f1-5add-4280-b940-edf26feef1de] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003127589s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-962295 addons disable inspektor-gadget --alsologtostderr -v=1: (5.772762718s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)

TestAddons/parallel/MetricsServer (7.1s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.207638ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-g5qdz" [fd1d9200-672d-4b84-97a5-ad19b7b52514] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003164905s
addons_test.go:463: (dbg) Run:  kubectl --context addons-962295 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.10s)
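The substance of this test is that resource metrics become queryable once the metrics-server pod is Ready. A small sketch; the top nodes call is an addition over the log but queries the same metrics.k8s.io API:

# Pod-level metrics, exactly as the test runs them.
kubectl --context addons-962295 top pods -n kube-system

# Node-level metrics come from the same metrics API.
kubectl --context addons-962295 top nodes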

TestAddons/parallel/CSI (66.18s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1206 08:33:03.039711    4292 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 08:33:03.048594    4292 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 08:33:03.048621    4292 kapi.go:107] duration metric: took 12.66645ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 12.676616ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-962295 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-962295 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [61aacac7-eee2-4f4d-8dcf-024a91661f48] Pending
helpers_test.go:352: "task-pv-pod" [61aacac7-eee2-4f4d-8dcf-024a91661f48] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.002969422s
addons_test.go:572: (dbg) Run:  kubectl --context addons-962295 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-962295 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-962295 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-962295 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-962295 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-962295 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-962295 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [2bf5e9dc-1a11-4f61-87a0-937192e9edf8] Pending
helpers_test.go:352: "task-pv-pod-restore" [2bf5e9dc-1a11-4f61-87a0-937192e9edf8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [2bf5e9dc-1a11-4f61-87a0-937192e9edf8] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.013335752s
addons_test.go:614: (dbg) Run:  kubectl --context addons-962295 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-962295 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-962295 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-962295 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.983018886s)
--- PASS: TestAddons/parallel/CSI (66.18s)
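The CSI pass above exercises the full provision, snapshot, and restore loop. A minimal sketch of the shape of those manifests follows; the class names (csi-hostpath-sc, csi-hostpath-snapclass) and the size are assumptions, since the testdata files themselves are not reproduced in this report:

# Provision a claim against the csi-hostpath driver (class name assumed).
kubectl --context addons-962295 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc
  resources:
    requests:
      storage: 1Gi
EOF

# Snapshot the bound claim (snapshot class assumed); the restore step then
# names this snapshot as the dataSource of a fresh claim such as hpvc-restore.
kubectl --context addons-962295 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
EOF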

TestAddons/parallel/Headlamp (17.07s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-962295 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-962295 --alsologtostderr -v=1: (1.241160976s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-n5z99" [42b2788a-10ef-4e17-9e97-cb29221d4d2c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-n5z99" [42b2788a-10ef-4e17-9e97-cb29221d4d2c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004164336s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-962295 addons disable headlamp --alsologtostderr -v=1: (5.823354794s)
--- PASS: TestAddons/parallel/Headlamp (17.07s)

TestAddons/parallel/CloudSpanner (5.97s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-pf76s" [ffb28521-5919-4a04-a9b7-836fac4f3307] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005157964s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.97s)

TestAddons/parallel/LocalPath (9.86s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-962295 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-962295 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-962295 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [15620392-ea9b-41a2-a049-f32c7decb5aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [15620392-ea9b-41a2-a049-f32c7decb5aa] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [15620392-ea9b-41a2-a049-f32c7decb5aa] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002864657s
addons_test.go:967: (dbg) Run:  kubectl --context addons-962295 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 ssh "cat /opt/local-path-provisioner/pvc-6ee8bbdd-4db8-40a5-ac45-1806450fcd72_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-962295 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-962295 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.86s)
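The PVC polling above reflects how the local-path provisioner binds: the claim typically stays Pending until a consuming pod is scheduled (WaitForFirstConsumer binding is the usual local-path default; an assumption here, not shown in the log), and the data lands under /opt/local-path-provisioner/ on the node, which is what the ssh cat verifies. The same checks by hand:

# Phase stays Pending until the pod mounts the claim.
kubectl --context addons-962295 get pvc test-pvc -o jsonpath='{.status.phase}'

# The provisioned volume is a plain host directory inside the node.
minikube -p addons-962295 ssh "ls /opt/local-path-provisioner/"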

TestAddons/parallel/NvidiaDevicePlugin (6.85s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-h5z7x" [cbaf74d0-27f0-4434-ab88-ae439655a532] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003560529s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.85s)

TestAddons/parallel/Yakd (10.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-gzfvn" [f7b11f1d-d72e-4af0-9987-998c24525d84] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006959248s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-962295 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-962295 addons disable yakd --alsologtostderr -v=1: (5.850930124s)
--- PASS: TestAddons/parallel/Yakd (10.86s)

TestAddons/StoppedEnableDisable (12.35s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-962295
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-962295: (12.078335176s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-962295
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-962295
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-962295
--- PASS: TestAddons/StoppedEnableDisable (12.35s)
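What this test pins down is that addon toggles are accepted while the cluster is stopped; the change is recorded in the profile configuration and takes effect on the next start. A sketch, assuming an installed minikube:

minikube stop -p addons-962295
minikube addons enable dashboard -p addons-962295    # accepted while stopped
minikube addons disable dashboard -p addons-962295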

TestCertOptions (40.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-117308 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-117308 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (37.656846294s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-117308 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-117308 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-117308 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-117308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-117308
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-117308: (2.176989647s)
--- PASS: TestCertOptions (40.85s)
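The openssl step is the interesting one: it confirms the extra --apiserver-ips/--apiserver-names values landed in the certificate's SANs, and the config view confirms the kubeconfig carries the custom port. A hand-run sketch (the grep patterns are illustrative, not from the test):

# Look for the requested SANs in the generated apiserver cert.
minikube -p cert-options-117308 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -E 'www.google.com|192.168.15.15'

# The kubeconfig server URL should carry the requested port 8555.
kubectl --context cert-options-117308 config view | grep 8555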

TestCertExpiration (222.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-980262 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1206 09:45:55.754684    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-980262 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (33.331772599s)
E1206 09:46:36.062161    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:47:57.330906    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:49:20.402419    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-980262 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-980262 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.963905823s)
helpers_test.go:175: Cleaning up "cert-expiration-980262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-980262
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-980262: (2.50211919s)
--- PASS: TestCertExpiration (222.80s)
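The ~220s runtime is mostly deliberate waiting: the first start issues certificates with a 3-minute lifetime, the test lets them lapse, and the second start with a long expiry regenerates them in place. A sketch of that sequence (the explicit sleep stands in for the test's wait):

minikube start -p cert-expiration-980262 --memory=3072 --cert-expiration=3m \
  --driver=docker --container-runtime=containerd
sleep 180   # let the short-lived certificates expire
minikube start -p cert-expiration-980262 --memory=3072 --cert-expiration=8760h \
  --driver=docker --container-runtime=containerd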

TestForceSystemdFlag (37.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-346826 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-346826 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.514138077s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-346826 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-346826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-346826
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-346826: (2.465241573s)
--- PASS: TestForceSystemdFlag (37.31s)
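The ssh cat is checking that --force-systemd flipped containerd to the systemd cgroup driver. The same check by hand, grepping for the standard containerd key:

minikube -p force-systemd-flag-346826 ssh "cat /etc/containerd/config.toml" \
  | grep SystemdCgroup    # expect: SystemdCgroup = true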

TestForceSystemdEnv (36.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-003791 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-003791 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.451103286s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-003791 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-003791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-003791
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-003791: (2.110576883s)
--- PASS: TestForceSystemdEnv (36.87s)
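Same verification as the flag test, but driven through the environment; MINIKUBE_FORCE_SYSTEMD appears in minikube's own startup banner elsewhere in this report:

MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-003791 \
  --memory=3072 --driver=docker --container-runtime=containerd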

TestDockerEnvContainerd (48.53s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-157041 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-157041 --driver=docker  --container-runtime=containerd: (32.622363231s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-157041"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-157041": (1.120295588s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bYa2YYIoFKWI/agent.24098" SSH_AGENT_PID="24099" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bYa2YYIoFKWI/agent.24098" SSH_AGENT_PID="24099" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bYa2YYIoFKWI/agent.24098" SSH_AGENT_PID="24099" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.319501127s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bYa2YYIoFKWI/agent.24098" SSH_AGENT_PID="24099" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-157041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-157041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-157041: (2.082963834s)
--- PASS: TestDockerEnvContainerd (48.53s)
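The bash -c incantations above are what eval-ing docker-env produces: a DOCKER_HOST pointing at the node over ssh, plus an ssh-agent socket holding the node key. A hand-run sketch:

# Emit and apply the exports (--ssh-host uses ssh:// instead of TCP+TLS,
# --ssh-add loads the node key into an agent).
eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-157041)"

docker version    # now talks to the engine inside the minikube node
docker image ls   # images built in this shell land in the node's store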

TestErrorSpam/setup (33.52s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-058318 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-058318 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-058318 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-058318 --driver=docker  --container-runtime=containerd: (33.523498972s)
--- PASS: TestErrorSpam/setup (33.52s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (1.65s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 stop: (1.437796651s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-058318 --log_dir /tmp/nospam-058318 stop
--- PASS: TestErrorSpam/stop (1.65s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (48.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-181746 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1206 08:36:36.064512    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:36.071268    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:36.083801    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:36.105183    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:36.146544    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:36.227906    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:36.389234    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:36.711495    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:37.353594    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:38.635098    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:41.197415    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:46.319527    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-181746 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.415541518s)
--- PASS: TestFunctional/serial/StartWithProxy (48.42s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.71s)

=== RUN   TestFunctional/serial/SoftStart
I1206 08:36:50.127876    4292 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-181746 --alsologtostderr -v=8
E1206 08:36:56.561888    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-181746 --alsologtostderr -v=8: (7.706958302s)
functional_test.go:678: soft start took 7.711227141s for "functional-181746" cluster.
I1206 08:36:57.835193    4292 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.71s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-181746 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-181746 cache add registry.k8s.io/pause:3.1: (1.255693868s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-181746 cache add registry.k8s.io/pause:3.3: (1.364240532s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-181746 cache add registry.k8s.io/pause:latest: (1.043532805s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-181746 /tmp/TestFunctionalserialCacheCmdcacheadd_local1201282949/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 cache add minikube-local-cache-test:functional-181746
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 cache delete minikube-local-cache-test:functional-181746
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-181746
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)
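Both cache tests funnel through the same mechanism: cache add pulls (or, for a locally built image, exports) the image and loads it into the node's image store, so later starts can use it offline. A combined sketch of the two paths:

# Remote image: pulled once, cached on the host, loaded into the node.
minikube -p functional-181746 cache add registry.k8s.io/pause:3.1

# Local image: built with docker, then added by name from the local daemon.
docker build -t minikube-local-cache-test:functional-181746 .
minikube -p functional-181746 cache add minikube-local-cache-test:functional-181746

minikube cache list    # the cache is host-global, hence no -p here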

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.669661ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)
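The Non-zero exit above is the point of the test: after crictl rmi, the image really is gone from the node, and cache reload restores it from the host-side cache. Replayed by hand:

minikube -p functional-181746 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-181746 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: no such image
minikube -p functional-181746 cache reload
minikube -p functional-181746 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds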

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 kubectl -- --context functional-181746 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-181746 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (43.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-181746 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 08:37:17.043530    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-181746 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.167786069s)
functional_test.go:776: restart took 43.167881535s for "functional-181746" cluster.
I1206 08:37:48.889580    4292 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (43.17s)
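--extra-config takes component.flag=value pairs and is applied by restarting the cluster, which is why this shows up as a ~43s restart of an already-running profile. The shape of the invocation:

minikube start -p functional-181746 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --wait=all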

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-181746 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-181746 logs: (1.466822851s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.5s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 logs --file /tmp/TestFunctionalserialLogsFileCmd379112844/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-181746 logs --file /tmp/TestFunctionalserialLogsFileCmd379112844/001/logs.txt: (1.497388346s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

TestFunctional/serial/InvalidService (4.33s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-181746 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-181746
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-181746: exit status 115 (453.231987ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30281 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-181746 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.33s)
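A NodePort service whose selector matches no running pod is enough to reproduce the SVC_UNREACHABLE exit above. This is a sketch only; the actual testdata/invalidsvc.yaml is not reproduced in this report, so the selector and port here are assumptions:

kubectl --context functional-181746 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod    # matches nothing, so the service has no endpoints
  ports:
  - port: 80
EOF

minikube service invalid-svc -p functional-181746   # exits 115, as logged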

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 config get cpus: exit status 14 (72.445763ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 config get cpus: exit status 14 (134.694478ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
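The exit status 14 runs are expected: config get on an unset key is an error by design. The round-trip being exercised:

minikube -p functional-181746 config get cpus    # unset -> exit status 14
minikube -p functional-181746 config set cpus 2
minikube -p functional-181746 config get cpus    # prints 2
minikube -p functional-181746 config unset cpus
minikube -p functional-181746 config get cpus    # exit status 14 again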

TestFunctional/parallel/DashboardCmd (7.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-181746 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-181746 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 41440: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.66s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (199.674494ms)
-- stdout --
	* [functional-181746] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1206 08:38:34.866581   40693 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:38:34.866710   40693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:34.866808   40693 out.go:374] Setting ErrFile to fd 2...
	I1206 08:38:34.866816   40693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:34.867064   40693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:38:34.867478   40693 out.go:368] Setting JSON to false
	I1206 08:38:34.868369   40693 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1266,"bootTime":1765009049,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:38:34.868438   40693 start.go:143] virtualization:  
	I1206 08:38:34.871895   40693 out.go:179] * [functional-181746] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 08:38:34.874954   40693 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:38:34.875041   40693 notify.go:221] Checking for updates...
	I1206 08:38:34.881551   40693 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:38:34.884466   40693 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:38:34.887595   40693 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:38:34.890490   40693 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:38:34.893411   40693 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:38:34.896688   40693 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 08:38:34.897376   40693 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:38:34.928877   40693 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:38:34.928990   40693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:38:34.996020   40693 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 08:38:34.982347886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:38:34.996155   40693 docker.go:319] overlay module found
	I1206 08:38:34.999558   40693 out.go:179] * Using the docker driver based on existing profile
	I1206 08:38:35.002460   40693 start.go:309] selected driver: docker
	I1206 08:38:35.002487   40693 start.go:927] validating driver "docker" against &{Name:functional-181746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-181746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:38:35.002604   40693 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:38:35.006257   40693 out.go:203] 
	W1206 08:38:35.009158   40693 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 08:38:35.012104   40693 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-181746 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (280.886867ms)

-- stdout --
	* [functional-181746] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 08:38:36.426984   41035 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:38:36.427177   41035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:36.427191   41035 out.go:374] Setting ErrFile to fd 2...
	I1206 08:38:36.427198   41035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:38:36.430793   41035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 08:38:36.431260   41035 out.go:368] Setting JSON to false
	I1206 08:38:36.432275   41035 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1268,"bootTime":1765009049,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 08:38:36.432348   41035 start.go:143] virtualization:  
	I1206 08:38:36.436017   41035 out.go:179] * [functional-181746] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1206 08:38:36.440039   41035 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:38:36.440258   41035 notify.go:221] Checking for updates...
	I1206 08:38:36.445964   41035 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:38:36.449008   41035 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 08:38:36.451896   41035 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 08:38:36.454886   41035 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 08:38:36.460373   41035 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:38:36.463854   41035 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 08:38:36.464461   41035 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:38:36.520913   41035 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 08:38:36.521030   41035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 08:38:36.619993   41035 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 08:38:36.608766379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 08:38:36.620102   41035 docker.go:319] overlay module found
	I1206 08:38:36.625454   41035 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 08:38:36.628322   41035 start.go:309] selected driver: docker
	I1206 08:38:36.628349   41035 start.go:927] validating driver "docker" against &{Name:functional-181746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-181746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:38:36.628468   41035 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:38:36.632100   41035 out.go:203] 
	W1206 08:38:36.635510   41035 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 08:38:36.642514   41035 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

TestFunctional/parallel/StatusCmd (1.1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)

TestFunctional/parallel/ServiceCmdConnect (8.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-181746 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-181746 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9qbq7" [9e3d21eb-df57-408f-a545-cfae32ef22f3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-9qbq7" [9e3d21eb-df57-408f-a545-cfae32ef22f3] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003175761s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32398
functional_test.go:1680: http://192.168.49.2:32398: success! body:
Request served by hello-node-connect-7d85dfc575-9qbq7

HTTP/1.1 GET /

Host: 192.168.49.2:32398
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.69s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (24.58s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b20f1fc3-0574-4a61-aa54-7ab75eadff0f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005569921s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-181746 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-181746 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-181746 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-181746 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [27167e36-3ff0-46ca-9b08-47533ff9f805] Pending
helpers_test.go:352: "sp-pod" [27167e36-3ff0-46ca-9b08-47533ff9f805] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [27167e36-3ff0-46ca-9b08-47533ff9f805] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00336255s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-181746 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-181746 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-181746 delete -f testdata/storage-provisioner/pod.yaml: (1.527829681s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-181746 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b3979fb5-6ece-417f-8b84-966c81bf1f06] Pending
helpers_test.go:352: "sp-pod" [b3979fb5-6ece-417f-8b84-966c81bf1f06] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003676579s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-181746 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.58s)

TestFunctional/parallel/SSHCmd (0.89s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.89s)

TestFunctional/parallel/CpCmd (1.86s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh -n functional-181746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 cp functional-181746:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2660383230/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh -n functional-181746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh -n functional-181746 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4292/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo cat /etc/test/nested/copy/4292/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.74s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4292.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo cat /etc/ssl/certs/4292.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4292.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo cat /usr/share/ca-certificates/4292.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/42922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo cat /etc/ssl/certs/42922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/42922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo cat /usr/share/ca-certificates/42922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.74s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-181746 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 ssh "sudo systemctl is-active docker": exit status 1 (465.156825ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 ssh "sudo systemctl is-active crio": exit status 1 (379.553856ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-181746 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-181746 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-181746 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 35998: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-181746 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-181746 version -o=json --components: (1.465695247s)
--- PASS: TestFunctional/parallel/Version/components (1.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-181746 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-181746 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e86d5ec9-a322-4482-ba3c-398cef25e0c7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1206 08:37:58.007455    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "nginx-svc" [e86d5ec9-a322-4482-ba3c-398cef25e0c7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005765332s
I1206 08:38:07.782621    4292 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-181746 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-181746
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-181746
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-181746 image ls --format short --alsologtostderr:
I1206 08:38:40.883358   41846 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:40.883583   41846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:40.883604   41846 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:40.883629   41846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:40.883897   41846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 08:38:40.884524   41846 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:40.884677   41846 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:40.885226   41846 cli_runner.go:164] Run: docker container inspect functional-181746 --format={{.State.Status}}
I1206 08:38:40.907683   41846 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:40.907738   41846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181746
I1206 08:38:40.930004   41846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-181746/id_rsa Username:docker}
I1206 08:38:41.034249   41846 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-181746 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/library/minikube-local-cache-test │ functional-181746  │ sha256:5294eb │ 991B   │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kicbase/echo-server               │ functional-181746  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-181746 image ls --format table --alsologtostderr:
I1206 08:38:44.615322   42175 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:44.615474   42175 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:44.615486   42175 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:44.615492   42175 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:44.615898   42175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 08:38:44.617195   42175 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:44.617371   42175 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:44.618142   42175 cli_runner.go:164] Run: docker container inspect functional-181746 --format={{.State.Status}}
I1206 08:38:44.647491   42175 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:44.647541   42175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181746
I1206 08:38:44.678770   42175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-181746/id_rsa Username:docker}
I1206 08:38:44.787496   42175 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-181746 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDige
sts":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:5294eb1309299d240981eee230965d3e70b3f5d29d3eca33acb510d478dc7d4f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-181746"],"size":"991"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb
69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigest
s":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-181746","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"26793
9"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-181746 image ls --format json --alsologtostderr:
I1206 08:38:44.357447   42137 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:44.357570   42137 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:44.357583   42137 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:44.357589   42137 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:44.359133   42137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 08:38:44.360191   42137 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:44.360322   42137 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:44.361038   42137 cli_runner.go:164] Run: docker container inspect functional-181746 --format={{.State.Status}}
I1206 08:38:44.382384   42137 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:44.382448   42137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181746
I1206 08:38:44.403199   42137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-181746/id_rsa Username:docker}
I1206 08:38:44.509937   42137 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-181746 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-181746
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:5294eb1309299d240981eee230965d3e70b3f5d29d3eca33acb510d478dc7d4f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-181746
size: "991"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-181746 image ls --format yaml --alsologtostderr:
I1206 08:38:41.162668   41883 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:41.162859   41883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:41.162866   41883 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:41.162871   41883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:41.163224   41883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 08:38:41.164170   41883 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:41.164378   41883 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:41.164984   41883 cli_runner.go:164] Run: docker container inspect functional-181746 --format={{.State.Status}}
I1206 08:38:41.189478   41883 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:41.189534   41883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181746
I1206 08:38:41.213165   41883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-181746/id_rsa Username:docker}
I1206 08:38:41.326713   41883 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
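
For reference, the YAML listing above can be reproduced by hand. A minimal sketch, assuming the functional-181746 profile is still running; the second command is the CRI-level call the test shells out to:

  # List images known to the node's container runtime, rendered as YAML
  out/minikube-linux-arm64 -p functional-181746 image ls --format yaml
  # The same inventory straight from CRI inside the node
  out/minikube-linux-arm64 -p functional-181746 ssh sudo crictl images --output json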

TestFunctional/parallel/ImageCommands/ImageBuild (4.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 ssh pgrep buildkitd: exit status 1 (351.816162ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image build -t localhost/my-image:functional-181746 testdata/build --alsologtostderr
2025/12/06 08:38:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-181746 image build -t localhost/my-image:functional-181746 testdata/build --alsologtostderr: (4.190048197s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-181746 image build -t localhost/my-image:functional-181746 testdata/build --alsologtostderr:
I1206 08:38:41.796474   41984 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:41.796712   41984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:41.796736   41984 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:41.796754   41984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:41.797017   41984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 08:38:41.797626   41984 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:41.799900   41984 config.go:182] Loaded profile config "functional-181746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 08:38:41.800477   41984 cli_runner.go:164] Run: docker container inspect functional-181746 --format={{.State.Status}}
I1206 08:38:41.818788   41984 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:41.818838   41984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181746
I1206 08:38:41.837295   41984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-181746/id_rsa Username:docker}
I1206 08:38:41.951828   41984 build_images.go:162] Building image from path: /tmp/build.1526308024.tar
I1206 08:38:41.951910   41984 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 08:38:41.967927   41984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1526308024.tar
I1206 08:38:41.974220   41984 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1526308024.tar: stat -c "%s %y" /var/lib/minikube/build/build.1526308024.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1526308024.tar': No such file or directory
I1206 08:38:41.974253   41984 ssh_runner.go:362] scp /tmp/build.1526308024.tar --> /var/lib/minikube/build/build.1526308024.tar (3072 bytes)
I1206 08:38:42.008699   41984 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1526308024
I1206 08:38:42.019573   41984 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1526308024 -xf /var/lib/minikube/build/build.1526308024.tar
I1206 08:38:42.029728   41984 containerd.go:394] Building image: /var/lib/minikube/build/build.1526308024
I1206 08:38:42.029876   41984 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1526308024 --local dockerfile=/var/lib/minikube/build/build.1526308024 --output type=image,name=localhost/my-image:functional-181746
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:8fba02c37add567fa73c9c63f009a7824272d0b91d88c7f0af4cbd37655f7b88
#8 exporting manifest sha256:8fba02c37add567fa73c9c63f009a7824272d0b91d88c7f0af4cbd37655f7b88 0.0s done
#8 exporting config sha256:4af639004c6670912b64f889ef7b8d26d14d7040c026280211941901da7c3e82 0.0s done
#8 naming to localhost/my-image:functional-181746 done
#8 DONE 0.2s
I1206 08:38:45.891923   41984 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1526308024 --local dockerfile=/var/lib/minikube/build/build.1526308024 --output type=image,name=localhost/my-image:functional-181746: (3.861998726s)
I1206 08:38:45.891984   41984 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1526308024
I1206 08:38:45.903962   41984 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1526308024.tar
I1206 08:38:45.918750   41984 build_images.go:218] Built localhost/my-image:functional-181746 from /tmp/build.1526308024.tar
I1206 08:38:45.918778   41984 build_images.go:134] succeeded building to: functional-181746
I1206 08:38:45.918783   41984 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.78s)
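
As the log shows, on containerd the image build is staged as a tar under /var/lib/minikube/build and handed to BuildKit. A sketch of the equivalent manual invocation inside the node; build.1526308024 is this run's temporary name, so substitute your own context directory:

  sudo buildctl build \
    --frontend dockerfile.v0 \
    --local context=/var/lib/minikube/build/build.1526308024 \
    --local dockerfile=/var/lib/minikube/build/build.1526308024 \
    --output type=image,name=localhost/my-image:functional-181746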

TestFunctional/parallel/ImageCommands/Setup (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-181746
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image load --daemon kicbase/echo-server:functional-181746 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-181746 image load --daemon kicbase/echo-server:functional-181746 --alsologtostderr: (1.176062684s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image load --daemon kicbase/echo-server:functional-181746 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-181746 image load --daemon kicbase/echo-server:functional-181746 --alsologtostderr: (1.161330541s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.41s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-181746
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image load --daemon kicbase/echo-server:functional-181746 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image save kicbase/echo-server:functional-181746 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image rm kicbase/echo-server:functional-181746 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-181746
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 image save --daemon kicbase/echo-server:functional-181746 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-181746
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
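
Taken together, the Setup/Load/Save/Remove steps above exercise a full round trip between the host Docker daemon and the cluster runtime. A condensed sketch; /tmp/echo-server-save.tar is a placeholder, the test itself uses a workspace path:

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-181746
  out/minikube-linux-arm64 -p functional-181746 image load --daemon kicbase/echo-server:functional-181746
  out/minikube-linux-arm64 -p functional-181746 image save kicbase/echo-server:functional-181746 /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-181746 image rm kicbase/echo-server:functional-181746
  out/minikube-linux-arm64 -p functional-181746 image load /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-181746 image save --daemon kicbase/echo-server:functional-181746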

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-181746 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
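
The IngressIP step reads the address the tunnel assigned to the Service's LoadBalancer status. The check boils down to one jsonpath query (quoted here for shell safety):

  kubectl --context functional-181746 get svc nginx-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'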

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.181.96 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-181746 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (8.47s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdany-port2354432045/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765010288414683631" to /tmp/TestFunctionalparallelMountCmdany-port2354432045/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765010288414683631" to /tmp/TestFunctionalparallelMountCmdany-port2354432045/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765010288414683631" to /tmp/TestFunctionalparallelMountCmdany-port2354432045/001/test-1765010288414683631
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (455.881975ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 08:38:08.872324    4292 retry.go:31] will retry after 548.962343ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 08:38 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 08:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 08:38 test-1765010288414683631
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh cat /mount-9p/test-1765010288414683631
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-181746 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8ab6e5ca-2a22-458c-8d48-742a6af6faab] Pending
helpers_test.go:352: "busybox-mount" [8ab6e5ca-2a22-458c-8d48-742a6af6faab] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8ab6e5ca-2a22-458c-8d48-742a6af6faab] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8ab6e5ca-2a22-458c-8d48-742a6af6faab] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00482996s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-181746 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdany-port2354432045/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.47s)
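
The any-port flow can be replayed by hand: start a 9p mount in the background, then verify it from inside the node. A sketch, with /tmp/hostdir standing in for the host directory to share:

  out/minikube-linux-arm64 mount -p functional-181746 /tmp/hostdir:/mount-9p &
  out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-181746 ssh -- ls -la /mount-9p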

TestFunctional/parallel/MountCmd/specific-port (2.39s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdspecific-port1238858760/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (497.178654ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 08:38:17.384798    4292 retry.go:31] will retry after 740.573939ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdspecific-port1238858760/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 ssh "sudo umount -f /mount-9p": exit status 1 (318.0341ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-181746 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdspecific-port1238858760/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.39s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3703545667/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3703545667/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3703545667/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T" /mount1: exit status 1 (597.23764ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 08:38:19.878946    4292 retry.go:31] will retry after 514.566815ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-181746 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3703545667/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3703545667/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-181746 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3703545667/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)
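
The cleanup VerifyCleanup exercises is a single command, useful whenever mount helper processes are left behind:

  out/minikube-linux-arm64 mount -p functional-181746 --kill=true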

TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-181746 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-181746 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-fhwth" [e0101597-825c-4c40-bb47-8d666f72673a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-fhwth" [e0101597-825c-4c40-bb47-8d666f72673a] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004342374s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)
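
DeployApp is the standard deploy-and-expose pair that the later ServiceCmd steps query:

  kubectl --context functional-181746 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-181746 expose deployment hello-node --type=NodePort --port=8080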

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "372.513552ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "65.472084ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "379.193841ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "68.014036ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
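
The ProfileCmd timings above contrast the full and light listings; the light variants return in roughly 65-70ms versus 370-380ms, which is consistent with them skipping the per-cluster status probe:

  out/minikube-linux-arm64 profile list                  # full table, probes each cluster
  out/minikube-linux-arm64 profile list -o json          # machine-readable
  out/minikube-linux-arm64 profile list -o json --light  # fast path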

TestFunctional/parallel/ServiceCmd/List (0.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 service list -o json
functional_test.go:1504: Took "590.475292ms" to run "out/minikube-linux-arm64 -p functional-181746 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30931
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.64s)

TestFunctional/parallel/ServiceCmd/Format (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.60s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-181746 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30931
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
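
The ServiceCmd steps above resolve the same NodePort endpoint three ways:

  out/minikube-linux-arm64 -p functional-181746 service hello-node --url
  out/minikube-linux-arm64 -p functional-181746 service --namespace=default --https --url hello-node
  out/minikube-linux-arm64 -p functional-181746 service hello-node --url --format={{.IP}}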

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-181746
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-181746
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-181746
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-090986 cache add registry.k8s.io/pause:3.1: (1.134380878s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-090986 cache add registry.k8s.io/pause:3.3: (1.206791721s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-090986 cache add registry.k8s.io/pause:latest: (1.078501561s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2803410184/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 cache add minikube-local-cache-test:functional-090986
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 cache delete minikube-local-cache-test:functional-090986
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-090986
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.264754ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.87s)
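
The cache steps above first populate minikube's local image cache, then prove that "cache reload" restores an image deleted from the node. The reload round trip in brief:

  out/minikube-linux-arm64 -p functional-090986 cache add registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-090986 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-090986 cache reload
  out/minikube-linux-arm64 -p functional-090986 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # found again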

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.99s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs306396320/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 config get cpus: exit status 14 (88.029905ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 config get cpus: exit status 14 (58.816511ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)
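
ConfigCmd pins down the exit-code contract of minikube config: "get" on an unset key fails with exit status 14, as both Non-zero exit entries above show. In sequence:

  out/minikube-linux-arm64 -p functional-090986 config unset cpus
  out/minikube-linux-arm64 -p functional-090986 config get cpus   # exit 14: key not found
  out/minikube-linux-arm64 -p functional-090986 config set cpus 2
  out/minikube-linux-arm64 -p functional-090986 config get cpus   # prints 2, exit 0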

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-090986 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-090986 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (187.02188ms)

-- stdout --
	* [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1206 09:07:41.544803   71607 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:07:41.544993   71607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.545021   71607 out.go:374] Setting ErrFile to fd 2...
	I1206 09:07:41.545042   71607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.545380   71607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:07:41.545800   71607 out.go:368] Setting JSON to false
	I1206 09:07:41.546708   71607 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3013,"bootTime":1765009049,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:07:41.546812   71607 start.go:143] virtualization:  
	I1206 09:07:41.551838   71607 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:07:41.554807   71607 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:07:41.554968   71607 notify.go:221] Checking for updates...
	I1206 09:07:41.560909   71607 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:07:41.563894   71607 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:07:41.566937   71607 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:07:41.570022   71607 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:07:41.572905   71607 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:07:41.576239   71607 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:41.576806   71607 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:07:41.600682   71607 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:07:41.600789   71607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:41.660442   71607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:41.650919433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:07:41.660550   71607 docker.go:319] overlay module found
	I1206 09:07:41.663645   71607 out.go:179] * Using the docker driver based on existing profile
	I1206 09:07:41.666513   71607 start.go:309] selected driver: docker
	I1206 09:07:41.666530   71607 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:41.666622   71607 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:07:41.670233   71607 out.go:203] 
	W1206 09:07:41.673086   71607 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:07:41.675922   71607 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-090986 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)
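Note: the DryRun pass above hinges on minikube's up-front resource validation: the 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 before any cluster state is touched. A minimal sketch of that kind of guard, with hypothetical names (minikube's actual implementation differs):

package main

import (
	"fmt"
	"os"
)

// minMemoryMB mirrors the 1800MB floor quoted in the error text above;
// the constant and function names here are illustrative, not minikube's.
const minMemoryMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minMemoryMB {
		return fmt.Errorf("requested memory allocation %dMB is less than the usable minimum of %dMB",
			requestedMB, minMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the exit status the test asserts on
	}
}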

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-090986 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-090986 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (189.79894ms)
-- stdout --
	* [functional-090986] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1206 09:07:41.355522   71559 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:07:41.355713   71559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.355727   71559 out.go:374] Setting ErrFile to fd 2...
	I1206 09:07:41.355733   71559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:41.356137   71559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:07:41.356634   71559 out.go:368] Setting JSON to false
	I1206 09:07:41.357482   71559 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3013,"bootTime":1765009049,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:07:41.357555   71559 start.go:143] virtualization:  
	I1206 09:07:41.361061   71559 out.go:179] * [functional-090986] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1206 09:07:41.363968   71559 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:07:41.364071   71559 notify.go:221] Checking for updates...
	I1206 09:07:41.370045   71559 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:07:41.372953   71559 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:07:41.375804   71559 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:07:41.378516   71559 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:07:41.381400   71559 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:07:41.384744   71559 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:07:41.385413   71559 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:07:41.414777   71559 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:07:41.414878   71559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:07:41.471221   71559 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:07:41.461605084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:07:41.471320   71559 docker.go:319] overlay module found
	I1206 09:07:41.476384   71559 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 09:07:41.479157   71559 start.go:309] selected driver: docker
	I1206 09:07:41.479175   71559 start.go:927] validating driver "docker" against &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:07:41.479274   71559 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:07:41.482905   71559 out.go:203] 
	W1206 09:07:41.485848   71559 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 09:07:41.488700   71559 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
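Note: InternationalLanguage re-runs the same failing dry-run and passes because the output is localized: "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on the existing profile", and the X line reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". A hedged sketch of reproducing this by hand; that minikube selects its message catalog from LC_ALL/LANG is an assumption about its locale detection:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Run the same dry-run under a French locale; the expected result is
	// exit status 23 with the localized messages quoted above.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-090986",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}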

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh -n functional-090986 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 cp functional-090986:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1959122984/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh -n functional-090986 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh -n functional-090986 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4292/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo cat /etc/test/nested/copy/4292/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)
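Note: FileSync exercises minikube's file-sync feature: files placed under $MINIKUBE_HOME/files on the host are copied into the node at the mirrored path, so /etc/test/nested/copy/4292/hosts corresponds to files/etc/test/nested/copy/4292/hosts (the 4292 component appears to be the test process ID, used to keep paths unique). A sketch of the same check the test performs over ssh:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Read the synced file from inside the node; the in-node path mirrors
	// the layout under $MINIKUBE_HOME/files on the host.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-090986",
		"ssh", "sudo cat /etc/test/nested/copy/4292/hosts").CombinedOutput()
	if err != nil {
		fmt.Println("file not synced:", err)
		return
	}
	fmt.Printf("synced content: %s", out)
}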

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.71s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4292.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo cat /etc/ssl/certs/4292.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4292.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo cat /usr/share/ca-certificates/4292.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/42922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo cat /etc/ssl/certs/42922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/42922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo cat /usr/share/ca-certificates/42922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.71s)
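Note: the hashed filenames checked last in each group (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) are OpenSSL subject-hash links, the c_rehash naming scheme OpenSSL uses to look up CA certificates in a directory. A sketch that computes the expected link name for a given PEM by shelling out to the openssl CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "openssl x509 -hash" prints the subject hash that becomes the
	// <hash>.0 filename under /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash",
		"-in", "/usr/share/ca-certificates/4292.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("expect /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}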

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.58s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 ssh "sudo systemctl is-active docker": exit status 1 (304.81125ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 ssh "sudo systemctl is-active crio": exit status 1 (275.778988ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.58s)
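Note: the non-zero exits above are the expected outcome, not failures: systemctl is-active prints the unit state and, per systemd's convention, exits 0 only for "active" (3 is typical for inactive units), so the test accepts the failing ssh as long as stdout says "inactive". A sketch of inspecting the state rather than only the error:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// is-active must be inspected, not merely error-checked: a non-zero
	// exit with "inactive" on stdout means the runtime is disabled.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-090986",
		"ssh", "sudo systemctl is-active docker").CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil && state != "inactive" {
		panic(fmt.Sprintf("unexpected state %q: %v", state, err))
	}
	fmt.Printf("docker unit is %q\n", state)
}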

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-090986 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "335.259611ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "59.406723ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "327.59886ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "49.814671ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.87s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2232204122/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.183189ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1206 09:07:34.582516    4292 retry.go:31] will retry after 425.033848ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2232204122/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 ssh "sudo umount -f /mount-9p": exit status 1 (297.349148ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-090986 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2232204122/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.87s)
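Note: minikube mount serves the host directory over 9p and mounts it inside the node; --port 46464 pins the host-side server port, which is what lets the test probe the mount with findmnt and tear it down with umount. A sketch of the corresponding guest-side mount; the mount options and the 192.168.49.1 host address (the usual gateway on the docker driver network) are illustrative assumptions, not verbatim minikube behavior:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Roughly the 9p mount minikube arranges in the node for --port 46464.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-090986", "ssh",
		"sudo mount -t 9p -o trans=tcp,port=46464,version=9p2000.L 192.168.49.1 /mount-9p")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}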

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T" /mount1: exit status 1 (560.230977ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1206 09:07:36.656934    4292 retry.go:31] will retry after 670.758075ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-090986 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-090986 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3792314433/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-090986 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-090986
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-090986
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-090986 image ls --format short --alsologtostderr:
I1206 09:07:54.402000   73786 out.go:360] Setting OutFile to fd 1 ...
I1206 09:07:54.402190   73786 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:54.402217   73786 out.go:374] Setting ErrFile to fd 2...
I1206 09:07:54.402238   73786 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:54.402528   73786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 09:07:54.403218   73786 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:54.403434   73786 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:54.403985   73786 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 09:07:54.421646   73786 ssh_runner.go:195] Run: systemctl --version
I1206 09:07:54.421709   73786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 09:07:54.439611   73786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 09:07:54.546631   73786 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
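Note: as the stderr trace shows, image ls is a thin wrapper: it runs sudo crictl images --output json in the node and formats the result, with the short format printing only repo tags. A sketch that does the same listing directly; the JSON shape (an "images" array whose entries carry "repoTags") is inferred from CRI's ListImages response:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages models just the fields needed from crictl's JSON output.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-090986",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}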

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-090986 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ localhost/my-image                          │ functional-090986  │ sha256:3c3638 │ 831kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kicbase/echo-server               │ functional-090986  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/library/minikube-local-cache-test │ functional-090986  │ sha256:5294eb │ 991B   │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-090986 image ls --format table --alsologtostderr:
I1206 09:07:58.745130   74176 out.go:360] Setting OutFile to fd 1 ...
I1206 09:07:58.745284   74176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:58.745318   74176 out.go:374] Setting ErrFile to fd 2...
I1206 09:07:58.745331   74176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:58.745603   74176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 09:07:58.746275   74176 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:58.746439   74176 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:58.747009   74176 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 09:07:58.764105   74176 ssh_runner.go:195] Run: systemctl --version
I1206 09:07:58.764156   74176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 09:07:58.784967   74176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 09:07:58.890041   74176 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-090986 image ls --format json --alsologtostderr:
[{"id":"sha256:3c36381cae33ea3bc2c2e6a6b4714894d0902b3179a0f06e465a22add810716c","repoDigests":[],"repoTags":["localhost/my-image:functional-090986"],"size":"830615"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-090986"],"size":"2173567"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c
1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:68b5f775f
18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:5294eb1309299d240981eee230965d3e70b3f5d29d3eca33acb510d478dc7d4f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-090986"],"size":"991"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:8057e050077
3a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-090986 image ls --format json --alsologtostderr:
I1206 09:07:58.518592   74141 out.go:360] Setting OutFile to fd 1 ...
I1206 09:07:58.518757   74141 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:58.518779   74141 out.go:374] Setting ErrFile to fd 2...
I1206 09:07:58.518811   74141 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:58.519203   74141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 09:07:58.520235   74141 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:58.520407   74141 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:58.520980   74141 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 09:07:58.537950   74141 ssh_runner.go:195] Run: systemctl --version
I1206 09:07:58.538005   74141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 09:07:58.557654   74141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 09:07:58.662013   74141 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
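Note: the JSON format is the machine-readable variant of the same listing: a top-level array of objects with id, repoDigests, repoTags, and a string-valued size, exactly as dumped above. A sketch of consuming it from Go:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageInfo matches the fields visible in the JSON dump above.
type imageInfo struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-090986",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}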

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-090986 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-090986
size: "2173567"
- id: sha256:5294eb1309299d240981eee230965d3e70b3f5d29d3eca33acb510d478dc7d4f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-090986
size: "991"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-090986 image ls --format yaml --alsologtostderr:
I1206 09:07:54.629460   73829 out.go:360] Setting OutFile to fd 1 ...
I1206 09:07:54.629617   73829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:54.629643   73829 out.go:374] Setting ErrFile to fd 2...
I1206 09:07:54.629661   73829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:54.629952   73829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 09:07:54.630605   73829 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:54.630774   73829 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:54.631409   73829 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 09:07:54.649351   73829 ssh_runner.go:195] Run: systemctl --version
I1206 09:07:54.649409   73829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 09:07:54.669901   73829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 09:07:54.774214   73829 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.65s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-090986 ssh pgrep buildkitd: exit status 1 (259.26048ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image build -t localhost/my-image:functional-090986 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-090986 image build -t localhost/my-image:functional-090986 testdata/build --alsologtostderr: (3.138758988s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-090986 image build -t localhost/my-image:functional-090986 testdata/build --alsologtostderr:
I1206 09:07:55.119894   73929 out.go:360] Setting OutFile to fd 1 ...
I1206 09:07:55.120070   73929 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:55.120102   73929 out.go:374] Setting ErrFile to fd 2...
I1206 09:07:55.120126   73929 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:07:55.120449   73929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 09:07:55.121112   73929 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:55.121838   73929 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:07:55.122426   73929 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 09:07:55.139932   73929 ssh_runner.go:195] Run: systemctl --version
I1206 09:07:55.139996   73929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 09:07:55.158186   73929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 09:07:55.262449   73929 build_images.go:162] Building image from path: /tmp/build.337011259.tar
I1206 09:07:55.262525   73929 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 09:07:55.273049   73929 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.337011259.tar
I1206 09:07:55.280629   73929 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.337011259.tar: stat -c "%s %y" /var/lib/minikube/build/build.337011259.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.337011259.tar': No such file or directory
I1206 09:07:55.280661   73929 ssh_runner.go:362] scp /tmp/build.337011259.tar --> /var/lib/minikube/build/build.337011259.tar (3072 bytes)
I1206 09:07:55.298749   73929 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.337011259
I1206 09:07:55.306804   73929 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.337011259 -xf /var/lib/minikube/build/build.337011259.tar
I1206 09:07:55.314989   73929 containerd.go:394] Building image: /var/lib/minikube/build/build.337011259
I1206 09:07:55.315063   73929 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.337011259 --local dockerfile=/var/lib/minikube/build/build.337011259 --output type=image,name=localhost/my-image:functional-090986
#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:0f9cafc1b912339d84b0b326bb9cfed2242b82de8e343b2533a114ae319e9d78 0.0s done
#8 exporting config sha256:3c36381cae33ea3bc2c2e6a6b4714894d0902b3179a0f06e465a22add810716c 0.0s done
#8 naming to localhost/my-image:functional-090986 done
#8 DONE 0.2s
I1206 09:07:58.186919   73929 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.337011259 --local dockerfile=/var/lib/minikube/build/build.337011259 --output type=image,name=localhost/my-image:functional-090986: (2.871818941s)
I1206 09:07:58.186996   73929 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.337011259
I1206 09:07:58.195123   73929 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.337011259.tar
I1206 09:07:58.204557   73929 build_images.go:218] Built localhost/my-image:functional-090986 from /tmp/build.337011259.tar
I1206 09:07:58.204598   73929 build_images.go:134] succeeded building to: functional-090986
I1206 09:07:58.204603   73929 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.65s)
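
Note: the three build steps logged above ([1/3] FROM, [2/3] RUN true, [3/3] ADD) imply that testdata/build contains a Dockerfile along these lines. This is a reconstruction inferred from the buildctl output, not the verbatim file (the real Dockerfile transfers as 97B):

  # hypothetical reconstruction of testdata/build/Dockerfile, inferred from the log
  FROM gcr.io/k8s-minikube/busybox:latest
  RUN true
  ADD content.txt /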

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-090986
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image load --daemon kicbase/echo-server:functional-090986 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-090986 image load --daemon kicbase/echo-server:functional-090986 --alsologtostderr: (1.05577652s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image load --daemon kicbase/echo-server:functional-090986 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-090986
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image load --daemon kicbase/echo-server:functional-090986 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image save kicbase/echo-server:functional-090986 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image rm kicbase/echo-server:functional-090986 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-090986
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 image save --daemon kicbase/echo-server:functional-090986 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-090986
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)
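
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full image round-trip. A condensed sketch of the same flow, using the profile and tag from the log but a shortened tarball path:

  # save from the cluster's image store to a tarball, remove, then reload
  out/minikube-linux-arm64 -p functional-090986 image save kicbase/echo-server:functional-090986 /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-090986 image rm kicbase/echo-server:functional-090986
  out/minikube-linux-arm64 -p functional-090986 image load /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-090986 image ls
  # save --daemon pushes the image back into the host Docker daemon instead
  out/minikube-linux-arm64 -p functional-090986 image save --daemon kicbase/echo-server:functional-090986
  docker image inspect kicbase/echo-server:functional-090986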

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-090986 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)
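
All three update-context cases run the same command; it rewrites the profile's entry in the active kubeconfig so the server address matches the cluster's current IP and port. A minimal way to observe the effect (a sketch, assuming the profile is the current kubectl context):

  out/minikube-linux-arm64 -p functional-090986 update-context
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'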

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-090986
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-090986
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-090986
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (153.14s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1206 09:10:55.754534    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:10:55.760899    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:10:55.772292    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:10:55.793653    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:10:55.835050    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:10:55.916397    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:10:56.078104    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:10:56.399713    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:10:57.041725    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:10:58.323230    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:11:00.884543    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:11:06.006652    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:11:16.248413    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:11:36.062664    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:11:36.729813    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:17.692054    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m32.199075377s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (153.14s)
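
The --ha flag provisions multiple control-plane nodes behind a single API endpoint (the status traces below show it served at https://192.168.49.254:8443). A quick way to inspect the resulting topology once the profile is up (sketch):

  kubectl --context ha-158126 get nodes -o wide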

TestMultiControlPlane/serial/DeployApp (7.92s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 kubectl -- rollout status deployment/busybox: (4.832784176s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-cr7fr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-f4k7k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-ktqsd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-cr7fr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-f4k7k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-ktqsd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-cr7fr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-f4k7k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-ktqsd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.92s)

TestMultiControlPlane/serial/PingHostFromPods (1.78s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-cr7fr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-cr7fr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-f4k7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-f4k7k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-ktqsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 kubectl -- exec busybox-7b57f96db7-ktqsd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.78s)
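
The pipeline above extracts the resolved address from busybox-style nslookup output: line 5 is the "Address 1: <ip> <name>" answer for the queried name, and its third space-separated field is the IP, which the test then pings. In isolation (run inside one of the busybox pods; here it resolves to 192.168.49.1, the gateway of the cluster's docker network):

  IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
  ping -c 1 "$IP"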

TestMultiControlPlane/serial/AddWorkerNode (59.47s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 node add --alsologtostderr -v 5
E1206 09:12:57.331840    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:13:39.614167    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 node add --alsologtostderr -v 5: (58.335129711s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5: (1.134830977s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.47s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-158126 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.15s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.148196887s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.15s)

TestMultiControlPlane/serial/CopyFile (21.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 status --output json --alsologtostderr -v 5: (1.247902441s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp testdata/cp-test.txt ha-158126:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile24686231/001/cp-test_ha-158126.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126:/home/docker/cp-test.txt ha-158126-m02:/home/docker/cp-test_ha-158126_ha-158126-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m02 "sudo cat /home/docker/cp-test_ha-158126_ha-158126-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126:/home/docker/cp-test.txt ha-158126-m03:/home/docker/cp-test_ha-158126_ha-158126-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m03 "sudo cat /home/docker/cp-test_ha-158126_ha-158126-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126:/home/docker/cp-test.txt ha-158126-m04:/home/docker/cp-test_ha-158126_ha-158126-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m04 "sudo cat /home/docker/cp-test_ha-158126_ha-158126-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp testdata/cp-test.txt ha-158126-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile24686231/001/cp-test_ha-158126-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m02:/home/docker/cp-test.txt ha-158126:/home/docker/cp-test_ha-158126-m02_ha-158126.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126 "sudo cat /home/docker/cp-test_ha-158126-m02_ha-158126.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m02:/home/docker/cp-test.txt ha-158126-m03:/home/docker/cp-test_ha-158126-m02_ha-158126-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m03 "sudo cat /home/docker/cp-test_ha-158126-m02_ha-158126-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m02:/home/docker/cp-test.txt ha-158126-m04:/home/docker/cp-test_ha-158126-m02_ha-158126-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m04 "sudo cat /home/docker/cp-test_ha-158126-m02_ha-158126-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp testdata/cp-test.txt ha-158126-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile24686231/001/cp-test_ha-158126-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m03:/home/docker/cp-test.txt ha-158126:/home/docker/cp-test_ha-158126-m03_ha-158126.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126 "sudo cat /home/docker/cp-test_ha-158126-m03_ha-158126.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m03:/home/docker/cp-test.txt ha-158126-m02:/home/docker/cp-test_ha-158126-m03_ha-158126-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m02 "sudo cat /home/docker/cp-test_ha-158126-m03_ha-158126-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m03:/home/docker/cp-test.txt ha-158126-m04:/home/docker/cp-test_ha-158126-m03_ha-158126-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m04 "sudo cat /home/docker/cp-test_ha-158126-m03_ha-158126-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp testdata/cp-test.txt ha-158126-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile24686231/001/cp-test_ha-158126-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m04:/home/docker/cp-test.txt ha-158126:/home/docker/cp-test_ha-158126-m04_ha-158126.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126 "sudo cat /home/docker/cp-test_ha-158126-m04_ha-158126.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m04:/home/docker/cp-test.txt ha-158126-m02:/home/docker/cp-test_ha-158126-m04_ha-158126-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m02 "sudo cat /home/docker/cp-test_ha-158126-m04_ha-158126-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 cp ha-158126-m04:/home/docker/cp-test.txt ha-158126-m03:/home/docker/cp-test_ha-158126-m04_ha-158126-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m03 "sudo cat /home/docker/cp-test_ha-158126-m04_ha-158126-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.10s)
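
Each CopyFile step pairs a cp with an ssh cat to verify the transfer, repeated for every ordered combination of the host and the four nodes. The repeated unit, in isolation:

  out/minikube-linux-arm64 -p ha-158126 cp testdata/cp-test.txt ha-158126-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-158126 ssh -n ha-158126-m02 "sudo cat /home/docker/cp-test.txt"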

TestMultiControlPlane/serial/StopSecondaryNode (13.02s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 node stop m02 --alsologtostderr -v 5: (12.217613569s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5: exit status 7 (804.960076ms)

-- stdout --
	ha-158126
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158126-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-158126-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158126-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1206 09:14:17.586210   91798 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:14:17.586330   91798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:17.586341   91798 out.go:374] Setting ErrFile to fd 2...
	I1206 09:14:17.586346   91798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:14:17.586720   91798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:14:17.587046   91798 out.go:368] Setting JSON to false
	I1206 09:14:17.587076   91798 mustload.go:66] Loading cluster: ha-158126
	I1206 09:14:17.587806   91798 config.go:182] Loaded profile config "ha-158126": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:14:17.587826   91798 status.go:174] checking status of ha-158126 ...
	I1206 09:14:17.588598   91798 cli_runner.go:164] Run: docker container inspect ha-158126 --format={{.State.Status}}
	I1206 09:14:17.589502   91798 notify.go:221] Checking for updates...
	I1206 09:14:17.607827   91798 status.go:371] ha-158126 host status = "Running" (err=<nil>)
	I1206 09:14:17.607852   91798 host.go:66] Checking if "ha-158126" exists ...
	I1206 09:14:17.608142   91798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-158126
	I1206 09:14:17.639357   91798 host.go:66] Checking if "ha-158126" exists ...
	I1206 09:14:17.640299   91798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:14:17.642142   91798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-158126
	I1206 09:14:17.667584   91798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/ha-158126/id_rsa Username:docker}
	I1206 09:14:17.781015   91798 ssh_runner.go:195] Run: systemctl --version
	I1206 09:14:17.788228   91798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:14:17.803956   91798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:14:17.870943   91798 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-06 09:14:17.860837134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:14:17.871671   91798 kubeconfig.go:125] found "ha-158126" server: "https://192.168.49.254:8443"
	I1206 09:14:17.871708   91798 api_server.go:166] Checking apiserver status ...
	I1206 09:14:17.871758   91798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:14:17.886834   91798 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	I1206 09:14:17.895442   91798 api_server.go:182] apiserver freezer: "5:freezer:/docker/bf58d889adac2c84a94bb5ccd536cf2749ebbaeea09943df00d87cbdda3c02d8/kubepods/burstable/pod702980f393f41dc8f16d5834234b6252/9e3e9b195a7205c6f0914321f84a6906214be2396d0c4fe62ae832de50401599"
	I1206 09:14:17.895518   91798 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bf58d889adac2c84a94bb5ccd536cf2749ebbaeea09943df00d87cbdda3c02d8/kubepods/burstable/pod702980f393f41dc8f16d5834234b6252/9e3e9b195a7205c6f0914321f84a6906214be2396d0c4fe62ae832de50401599/freezer.state
	I1206 09:14:17.904520   91798 api_server.go:204] freezer state: "THAWED"
	I1206 09:14:17.904550   91798 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 09:14:17.913183   91798 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 09:14:17.913209   91798 status.go:463] ha-158126 apiserver status = Running (err=<nil>)
	I1206 09:14:17.913220   91798 status.go:176] ha-158126 status: &{Name:ha-158126 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:14:17.913253   91798 status.go:174] checking status of ha-158126-m02 ...
	I1206 09:14:17.913572   91798 cli_runner.go:164] Run: docker container inspect ha-158126-m02 --format={{.State.Status}}
	I1206 09:14:17.932193   91798 status.go:371] ha-158126-m02 host status = "Stopped" (err=<nil>)
	I1206 09:14:17.932218   91798 status.go:384] host is not running, skipping remaining checks
	I1206 09:14:17.932226   91798 status.go:176] ha-158126-m02 status: &{Name:ha-158126-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:14:17.932247   91798 status.go:174] checking status of ha-158126-m03 ...
	I1206 09:14:17.932579   91798 cli_runner.go:164] Run: docker container inspect ha-158126-m03 --format={{.State.Status}}
	I1206 09:14:17.950276   91798 status.go:371] ha-158126-m03 host status = "Running" (err=<nil>)
	I1206 09:14:17.950318   91798 host.go:66] Checking if "ha-158126-m03" exists ...
	I1206 09:14:17.950617   91798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-158126-m03
	I1206 09:14:17.969656   91798 host.go:66] Checking if "ha-158126-m03" exists ...
	I1206 09:14:17.969968   91798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:14:17.970013   91798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-158126-m03
	I1206 09:14:17.988606   91798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/ha-158126-m03/id_rsa Username:docker}
	I1206 09:14:18.102488   91798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:14:18.116539   91798 kubeconfig.go:125] found "ha-158126" server: "https://192.168.49.254:8443"
	I1206 09:14:18.116575   91798 api_server.go:166] Checking apiserver status ...
	I1206 09:14:18.116641   91798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:14:18.131833   91798 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	I1206 09:14:18.143951   91798 api_server.go:182] apiserver freezer: "5:freezer:/docker/f61990864a5706cb4b133a95df3dd2136e1dfe37cd90c08bce5fc605b23139ad/kubepods/burstable/pod5b7bef89a439a899447bce3fcfdc1de6/9f5801585ecc912168db2365f16a33acfc5bc59365f99efa66011f72a113796a"
	I1206 09:14:18.144022   91798 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f61990864a5706cb4b133a95df3dd2136e1dfe37cd90c08bce5fc605b23139ad/kubepods/burstable/pod5b7bef89a439a899447bce3fcfdc1de6/9f5801585ecc912168db2365f16a33acfc5bc59365f99efa66011f72a113796a/freezer.state
	I1206 09:14:18.152111   91798 api_server.go:204] freezer state: "THAWED"
	I1206 09:14:18.152191   91798 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 09:14:18.160605   91798 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 09:14:18.160636   91798 status.go:463] ha-158126-m03 apiserver status = Running (err=<nil>)
	I1206 09:14:18.160645   91798 status.go:176] ha-158126-m03 status: &{Name:ha-158126-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:14:18.160685   91798 status.go:174] checking status of ha-158126-m04 ...
	I1206 09:14:18.161014   91798 cli_runner.go:164] Run: docker container inspect ha-158126-m04 --format={{.State.Status}}
	I1206 09:14:18.180589   91798 status.go:371] ha-158126-m04 host status = "Running" (err=<nil>)
	I1206 09:14:18.180640   91798 host.go:66] Checking if "ha-158126-m04" exists ...
	I1206 09:14:18.181096   91798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-158126-m04
	I1206 09:14:18.199362   91798 host.go:66] Checking if "ha-158126-m04" exists ...
	I1206 09:14:18.199749   91798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:14:18.199800   91798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-158126-m04
	I1206 09:14:18.217861   91798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/ha-158126-m04/id_rsa Username:docker}
	I1206 09:14:18.320874   91798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:14:18.335051   91798 status.go:176] ha-158126-m04 status: &{Name:ha-158126-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.02s)
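
The exit status 7 above is expected: minikube status encodes component health in the exit code as a bitmask (1 = host, 2 = kubelet, 4 = apiserver), so a fully stopped m02 sets all three bits. The stderr trace also shows how the apiserver check works per control-plane node; roughly, as a shell sketch (using curl in place of the Go HTTP client; /healthz is anonymously readable in default clusters):

  pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
  sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup   # locate the freezer cgroup
  # read freezer.state under that cgroup and expect "THAWED" (not paused)
  curl -ks https://192.168.49.254:8443/healthz      # expect "ok"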

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.04s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 node start m02 --alsologtostderr -v 5: (12.795981031s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5: (1.145496818s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.04s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.116289642s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.44s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 stop --alsologtostderr -v 5: (37.777567605s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 start --wait true --alsologtostderr -v 5
E1206 09:15:55.754502    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:16:00.398736    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 start --wait true --alsologtostderr -v 5: (1m0.501268544s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.44s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.19s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 node delete m03 --alsologtostderr -v 5: (10.148764787s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5
E1206 09:16:23.456619    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.19s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36.43s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 stop --alsologtostderr -v 5
E1206 09:16:36.061716    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 stop --alsologtostderr -v 5: (36.267941784s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5: exit status 7 (163.253283ms)

-- stdout --
	ha-158126
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-158126-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-158126-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1206 09:17:01.088513  106530 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:17:01.089249  106530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:17:01.089273  106530 out.go:374] Setting ErrFile to fd 2...
	I1206 09:17:01.089281  106530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:17:01.089622  106530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:17:01.089853  106530 out.go:368] Setting JSON to false
	I1206 09:17:01.089888  106530 mustload.go:66] Loading cluster: ha-158126
	I1206 09:17:01.090023  106530 notify.go:221] Checking for updates...
	I1206 09:17:01.090399  106530 config.go:182] Loaded profile config "ha-158126": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:17:01.090424  106530 status.go:174] checking status of ha-158126 ...
	I1206 09:17:01.090983  106530 cli_runner.go:164] Run: docker container inspect ha-158126 --format={{.State.Status}}
	I1206 09:17:01.110547  106530 status.go:371] ha-158126 host status = "Stopped" (err=<nil>)
	I1206 09:17:01.110574  106530 status.go:384] host is not running, skipping remaining checks
	I1206 09:17:01.110581  106530 status.go:176] ha-158126 status: &{Name:ha-158126 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:17:01.110620  106530 status.go:174] checking status of ha-158126-m02 ...
	I1206 09:17:01.110967  106530 cli_runner.go:164] Run: docker container inspect ha-158126-m02 --format={{.State.Status}}
	I1206 09:17:01.133314  106530 status.go:371] ha-158126-m02 host status = "Stopped" (err=<nil>)
	I1206 09:17:01.133341  106530 status.go:384] host is not running, skipping remaining checks
	I1206 09:17:01.133360  106530 status.go:176] ha-158126-m02 status: &{Name:ha-158126-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:17:01.133388  106530 status.go:174] checking status of ha-158126-m04 ...
	I1206 09:17:01.133758  106530 cli_runner.go:164] Run: docker container inspect ha-158126-m04 --format={{.State.Status}}
	I1206 09:17:01.188787  106530 status.go:371] ha-158126-m04 host status = "Stopped" (err=<nil>)
	I1206 09:17:01.188813  106530 status.go:384] host is not running, skipping remaining checks
	I1206 09:17:01.188821  106530 status.go:176] ha-158126-m04 status: &{Name:ha-158126-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.43s)

TestMultiControlPlane/serial/RestartCluster (63.24s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1206 09:17:57.330870    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m2.189407534s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (63.24s)
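The last verification step above evaluates a go-template over "kubectl get nodes" that prints each node's Ready condition. A minimal sketch of how that template evaluates, run locally with Go's text/template against a hand-built stand-in for the NodeList JSON (three Ready nodes; values illustrative, not captured from this run):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Template string as passed to kubectl above; keys are lowercase
	// because kubectl evaluates the template against the JSON form.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Stand-in for a three-node NodeList.
	ready := map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}
	list := map[string]any{"items": []map[string]any{
		{"status": ready}, {"status": ready}, {"status": ready},
	}}

	t := template.Must(template.New("ready").Parse(tmpl))
	_ = t.Execute(os.Stdout, list) // prints " True" once per node
}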

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (64.1s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 node add --control-plane --alsologtostderr -v 5: (1m2.890810795s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-158126 status --alsologtostderr -v 5: (1.209059587s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (64.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.15s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.146723622s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.15s)

TestJSONOutput/start/Command (47.39s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-845524 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-845524 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (47.384689081s)
--- PASS: TestJSONOutput/start/Command (47.39s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-845524 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-845524 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.05s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-845524 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-845524 --output=json --user=testUser: (6.048868811s)
--- PASS: TestJSONOutput/stop/Command (6.05s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-225935 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-225935 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (120.418919ms)

-- stdout --
	{"specversion":"1.0","id":"d0a78e11-b53b-47a0-944f-4f3c8cc53432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-225935] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5dbb4a00-724a-4e76-962d-fe05c36b22d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22049"}}
	{"specversion":"1.0","id":"9451466b-43d4-44d2-9a8c-ac481825ef15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6179d177-576d-478d-85bf-d1d39e70f4a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig"}}
	{"specversion":"1.0","id":"0a095c69-605d-43cd-ba06-eebe292c0d8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube"}}
	{"specversion":"1.0","id":"a0a602a2-ea82-43b0-afa8-4ecce6250ed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"68c4803b-71c0-4f29-b3a4-c386294c90e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"12f91e9a-637b-47cf-9566-83b886a9eb72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-225935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-225935
--- PASS: TestErrorJSONOutput (0.28s)
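Every stdout line above is a CloudEvents envelope, which is what --output=json emits. A sketch of consuming that stream and surfacing step and error events; the struct models only the fields visible in the log, not minikube's full schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the envelope fields used below; all "data" values in the
// log are strings, so map[string]string suffices for this sketch.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g.: minikube start --output=json ... | thisprogram
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip any non-JSON noise
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}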

TestKicCustomNetwork/create_custom_network (39.2s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-314238 --network=
E1206 09:20:55.755047    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-314238 --network=: (36.904828502s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-314238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-314238
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-314238: (2.271634974s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.20s)

TestKicCustomNetwork/use_default_bridge_network (35.24s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-003421 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-003421 --network=bridge: (33.086800436s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-003421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-003421
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-003421: (2.129800476s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.24s)

TestKicExistingNetwork (35.66s)

=== RUN   TestKicExistingNetwork
I1206 09:21:34.668870    4292 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1206 09:21:34.684006    4292 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1206 09:21:34.684081    4292 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1206 09:21:34.684097    4292 cli_runner.go:164] Run: docker network inspect existing-network
W1206 09:21:34.699572    4292 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1206 09:21:34.699605    4292 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1206 09:21:34.699619    4292 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1206 09:21:34.699741    4292 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 09:21:34.716765    4292 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ece93e0bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8b:9b:7f:59:f5} reservation:<nil>}
I1206 09:21:34.717107    4292 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cc9a90}
I1206 09:21:34.717129    4292 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1206 09:21:34.717178    4292 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1206 09:21:34.777724    4292 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-751126 --network=existing-network
E1206 09:21:36.062054    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-751126 --network=existing-network: (33.355222528s)
helpers_test.go:175: Cleaning up "existing-network-751126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-751126
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-751126: (2.159765271s)
I1206 09:22:10.309537    4292 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.66s)
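The trace above shows the flow this test exercises: pre-create a docker network carrying minikube's labels, then start a profile attached to it. A sketch of the same two steps via os/exec, with every flag copied from the network_create.go lines above (a sketch of the sequence, not minikube's internal code path):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// The labels let minikube recognise the network as one of its own.
	run("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	run("minikube", "start", "-p", "existing-network-751126",
		"--network=existing-network")
}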

TestKicCustomSubnet (38.44s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-965461 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-965461 --subnet=192.168.60.0/24: (36.141861343s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-965461 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-965461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-965461
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-965461: (2.268183765s)
--- PASS: TestKicCustomSubnet (38.44s)
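The verification step formats "docker network inspect" with a go-template. The same template can be evaluated locally with text/template over a struct mirroring just the fields it touches, which makes the index/field chain explicit (subnet value taken from this run):

package main

import (
	"os"
	"text/template"
)

// network mirrors only the fields the docker --format template reads.
type network struct {
	IPAM struct {
		Config []struct{ Subnet string }
	}
}

func main() {
	var n network
	n.IPAM.Config = []struct{ Subnet string }{{Subnet: "192.168.60.0/24"}}

	// Same template string the test hands to docker network inspect.
	t := template.Must(template.New("subnet").Parse(`{{(index .IPAM.Config 0).Subnet}}`))
	_ = t.Execute(os.Stdout, n) // prints 192.168.60.0/24
}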

TestKicStaticIP (36.07s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-521761 --static-ip=192.168.200.200
E1206 09:22:57.331453    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-521761 --static-ip=192.168.200.200: (33.707906664s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-521761 ip
helpers_test.go:175: Cleaning up "static-ip-521761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-521761
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-521761: (2.19947677s)
--- PASS: TestKicStaticIP (36.07s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.32s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-295301 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-295301 --driver=docker  --container-runtime=containerd: (31.327800568s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-298181 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-298181 --driver=docker  --container-runtime=containerd: (32.899642083s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-295301
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-298181
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-298181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-298181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-298181: (2.160389876s)
helpers_test.go:175: Cleaning up "first-295301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-295301
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-295301: (2.346805709s)
--- PASS: TestMinikubeProfile (70.32s)

TestMountStart/serial/StartWithMountFirst (8.78s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-675086 --memory=3072 --mount-string /tmp/TestMountStartserial3769433564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-675086 --memory=3072 --mount-string /tmp/TestMountStartserial3769433564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.777539434s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.78s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-675086 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (8.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-677055 --memory=3072 --mount-string /tmp/TestMountStartserial3769433564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-677055 --memory=3072 --mount-string /tmp/TestMountStartserial3769433564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.283589494s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.28s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-677055 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-675086 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-675086 --alsologtostderr -v=5: (1.717201712s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-677055 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-677055
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-677055: (1.297053086s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (8.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-677055
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-677055: (7.052583266s)
--- PASS: TestMountStart/serial/RestartStopped (8.05s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-677055 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (108.08s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-044975 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1206 09:25:55.755119    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:26:19.141259    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:26:36.062244    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-044975 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.554391239s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.08s)

TestMultiNode/serial/DeployApp2Nodes (5.9s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-044975 -- rollout status deployment/busybox: (3.773553196s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-dhrlh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-vrzpc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-dhrlh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-vrzpc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-dhrlh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-vrzpc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.90s)

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-dhrlh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-dhrlh -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-vrzpc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-044975 -- exec busybox-7b57f96db7-vrzpc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
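The pipeline run in each pod, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, extracts the host IP from line 5 of busybox-style nslookup output before pinging it. A Go equivalent of the extraction; the sample output below assumes the busybox format and is not captured from this run:

package main

import (
	"fmt"
	"strings"
)

// hostIP mirrors the shell pipeline: take line 5 of the output, split on
// single spaces (cut -d' ' does not merge repeats), keep field 3.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Assumed busybox nslookup output shape.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.67.1
}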

TestMultiNode/serial/AddNode (28.25s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-044975 -v=5 --alsologtostderr
E1206 09:27:18.818680    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-044975 -v=5 --alsologtostderr: (27.54065457s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.25s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-044975 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.82s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.82s)

TestMultiNode/serial/CopyFile (10.71s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp testdata/cp-test.txt multinode-044975:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp multinode-044975:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3999165178/001/cp-test_multinode-044975.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp multinode-044975:/home/docker/cp-test.txt multinode-044975-m02:/home/docker/cp-test_multinode-044975_multinode-044975-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m02 "sudo cat /home/docker/cp-test_multinode-044975_multinode-044975-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp multinode-044975:/home/docker/cp-test.txt multinode-044975-m03:/home/docker/cp-test_multinode-044975_multinode-044975-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m03 "sudo cat /home/docker/cp-test_multinode-044975_multinode-044975-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp testdata/cp-test.txt multinode-044975-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp multinode-044975-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3999165178/001/cp-test_multinode-044975-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp multinode-044975-m02:/home/docker/cp-test.txt multinode-044975:/home/docker/cp-test_multinode-044975-m02_multinode-044975.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975 "sudo cat /home/docker/cp-test_multinode-044975-m02_multinode-044975.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp multinode-044975-m02:/home/docker/cp-test.txt multinode-044975-m03:/home/docker/cp-test_multinode-044975-m02_multinode-044975-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m03 "sudo cat /home/docker/cp-test_multinode-044975-m02_multinode-044975-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp testdata/cp-test.txt multinode-044975-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp multinode-044975-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3999165178/001/cp-test_multinode-044975-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp multinode-044975-m03:/home/docker/cp-test.txt multinode-044975:/home/docker/cp-test_multinode-044975-m03_multinode-044975.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975 "sudo cat /home/docker/cp-test_multinode-044975-m03_multinode-044975.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 cp multinode-044975-m03:/home/docker/cp-test.txt multinode-044975-m02:/home/docker/cp-test_multinode-044975-m03_multinode-044975-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 ssh -n multinode-044975-m02 "sudo cat /home/docker/cp-test_multinode-044975-m03_multinode-044975-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.71s)
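Each cp step above is verified by reading the file back with ssh and sudo cat. The round trip condenses to two commands; a sketch with the profile, node, and binary names taken from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64" // binary under test in this run

	// Copy a local file into the m02 node of the multinode profile.
	if out, err := exec.Command(mk, "-p", "multinode-044975", "cp",
		"testdata/cp-test.txt",
		"multinode-044975-m02:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp: %v\n%s", err, out)
	}

	// Read it back over ssh to confirm the contents survived the trip.
	out, err := exec.Command(mk, "-p", "multinode-044975", "ssh", "-n",
		"multinode-044975-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat: %v", err)
	}
	fmt.Printf("%s", out)
}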

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-044975 node stop m03: (1.314729008s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-044975 status: exit status 7 (555.148676ms)

-- stdout --
	multinode-044975
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-044975-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-044975-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-044975 status --alsologtostderr: exit status 7 (549.635388ms)

-- stdout --
	multinode-044975
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-044975-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-044975-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 09:27:43.180279  160141 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:43.180478  160141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:43.180511  160141 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:43.180535  160141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:43.181012  160141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:27:43.181306  160141 out.go:368] Setting JSON to false
	I1206 09:27:43.181380  160141 mustload.go:66] Loading cluster: multinode-044975
	I1206 09:27:43.182179  160141 config.go:182] Loaded profile config "multinode-044975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:27:43.182226  160141 status.go:174] checking status of multinode-044975 ...
	I1206 09:27:43.183115  160141 cli_runner.go:164] Run: docker container inspect multinode-044975 --format={{.State.Status}}
	I1206 09:27:43.183625  160141 notify.go:221] Checking for updates...
	I1206 09:27:43.204927  160141 status.go:371] multinode-044975 host status = "Running" (err=<nil>)
	I1206 09:27:43.204954  160141 host.go:66] Checking if "multinode-044975" exists ...
	I1206 09:27:43.205308  160141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-044975
	I1206 09:27:43.236763  160141 host.go:66] Checking if "multinode-044975" exists ...
	I1206 09:27:43.237064  160141 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:27:43.237109  160141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-044975
	I1206 09:27:43.254797  160141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/multinode-044975/id_rsa Username:docker}
	I1206 09:27:43.360700  160141 ssh_runner.go:195] Run: systemctl --version
	I1206 09:27:43.367466  160141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:27:43.380988  160141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:43.440293  160141 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 09:27:43.43101574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:27:43.440829  160141 kubeconfig.go:125] found "multinode-044975" server: "https://192.168.67.2:8443"
	I1206 09:27:43.440864  160141 api_server.go:166] Checking apiserver status ...
	I1206 09:27:43.440908  160141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:27:43.453878  160141 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	I1206 09:27:43.462386  160141 api_server.go:182] apiserver freezer: "5:freezer:/docker/fb67f3fcda8a2e0cf67664ffe4afd8de7d4b50e9685117bd3c0ee246c995b1b3/kubepods/burstable/pod1bc7d406fbeac88383830f134c0c9cfb/1c4dc697b43e7aa272a55de3bdbcbfafdca2aac95832ca79b8e3f31c32953044"
	I1206 09:27:43.462467  160141 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fb67f3fcda8a2e0cf67664ffe4afd8de7d4b50e9685117bd3c0ee246c995b1b3/kubepods/burstable/pod1bc7d406fbeac88383830f134c0c9cfb/1c4dc697b43e7aa272a55de3bdbcbfafdca2aac95832ca79b8e3f31c32953044/freezer.state
	I1206 09:27:43.470596  160141 api_server.go:204] freezer state: "THAWED"
	I1206 09:27:43.470625  160141 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1206 09:27:43.478925  160141 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1206 09:27:43.478951  160141 status.go:463] multinode-044975 apiserver status = Running (err=<nil>)
	I1206 09:27:43.478963  160141 status.go:176] multinode-044975 status: &{Name:multinode-044975 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:27:43.478990  160141 status.go:174] checking status of multinode-044975-m02 ...
	I1206 09:27:43.479301  160141 cli_runner.go:164] Run: docker container inspect multinode-044975-m02 --format={{.State.Status}}
	I1206 09:27:43.496410  160141 status.go:371] multinode-044975-m02 host status = "Running" (err=<nil>)
	I1206 09:27:43.496444  160141 host.go:66] Checking if "multinode-044975-m02" exists ...
	I1206 09:27:43.496743  160141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-044975-m02
	I1206 09:27:43.514162  160141 host.go:66] Checking if "multinode-044975-m02" exists ...
	I1206 09:27:43.514478  160141 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:27:43.514537  160141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-044975-m02
	I1206 09:27:43.532207  160141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/multinode-044975-m02/id_rsa Username:docker}
	I1206 09:27:43.640863  160141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:27:43.654309  160141 status.go:176] multinode-044975-m02 status: &{Name:multinode-044975-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:27:43.654348  160141 status.go:174] checking status of multinode-044975-m03 ...
	I1206 09:27:43.654788  160141 cli_runner.go:164] Run: docker container inspect multinode-044975-m03 --format={{.State.Status}}
	I1206 09:27:43.673299  160141 status.go:371] multinode-044975-m03 host status = "Stopped" (err=<nil>)
	I1206 09:27:43.673326  160141 status.go:384] host is not running, skipping remaining checks
	I1206 09:27:43.673334  160141 status.go:176] multinode-044975-m03 status: &{Name:multinode-044975-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
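The stderr trace shows how the status probe decides "apiserver: Running": locate the process with pgrep, confirm its freezer cgroup is THAWED, then GET /healthz. A stripped-down sketch of that final HTTP check, with the endpoint taken from the trace; skipping TLS verification is a shortcut for the sketch only, since the real probe trusts the cluster's own CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Sketch-only shortcut; production code should load the cluster CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}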

TestMultiNode/serial/StartAfterStop (7.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-044975 node start m03 -v=5 --alsologtostderr: (6.831957413s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.67s)

TestMultiNode/serial/RestartKeepsNodes (73s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-044975
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-044975
E1206 09:27:57.331454    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-044975: (25.145944238s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-044975 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-044975 --wait=true -v=5 --alsologtostderr: (47.71647578s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-044975
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.00s)

TestMultiNode/serial/DeleteNode (5.7s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-044975 node delete m03: (4.968208896s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.70s)

TestMultiNode/serial/StopMultiNode (24.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-044975 stop: (23.973040255s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-044975 status: exit status 7 (103.613769ms)

                                                
                                                
-- stdout --
	multinode-044975
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-044975-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-044975 status --alsologtostderr: exit status 7 (105.135038ms)

                                                
                                                
-- stdout --
	multinode-044975
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-044975-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:29:34.171551  168925 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:29:34.171724  168925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:29:34.171738  168925 out.go:374] Setting ErrFile to fd 2...
	I1206 09:29:34.171744  168925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:29:34.172009  168925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:29:34.172196  168925 out.go:368] Setting JSON to false
	I1206 09:29:34.172223  168925 mustload.go:66] Loading cluster: multinode-044975
	I1206 09:29:34.172367  168925 notify.go:221] Checking for updates...
	I1206 09:29:34.172639  168925 config.go:182] Loaded profile config "multinode-044975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:29:34.172661  168925 status.go:174] checking status of multinode-044975 ...
	I1206 09:29:34.173156  168925 cli_runner.go:164] Run: docker container inspect multinode-044975 --format={{.State.Status}}
	I1206 09:29:34.193700  168925 status.go:371] multinode-044975 host status = "Stopped" (err=<nil>)
	I1206 09:29:34.193720  168925 status.go:384] host is not running, skipping remaining checks
	I1206 09:29:34.193727  168925 status.go:176] multinode-044975 status: &{Name:multinode-044975 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:29:34.193752  168925 status.go:174] checking status of multinode-044975-m02 ...
	I1206 09:29:34.194111  168925 cli_runner.go:164] Run: docker container inspect multinode-044975-m02 --format={{.State.Status}}
	I1206 09:29:34.228303  168925 status.go:371] multinode-044975-m02 host status = "Stopped" (err=<nil>)
	I1206 09:29:34.228327  168925 status.go:384] host is not running, skipping remaining checks
	I1206 09:29:34.228334  168925 status.go:176] multinode-044975-m02 status: &{Name:multinode-044975-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.18s)
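
Note that a fully stopped profile makes "minikube status" exit with code 7 rather than 0, which is what the test asserts on. A script can use that directly (sketch; profile name is a placeholder):

	minikube -p <profile> status
	if [ $? -eq 7 ]; then
		echo "profile is stopped"
	fi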

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-044975 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-044975 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.302147737s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-044975 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.02s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-044975
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-044975-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-044975-m02 --driver=docker  --container-runtime=containerd: exit status 14 (93.030028ms)

                                                
                                                
-- stdout --
	* [multinode-044975-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-044975-m02' is duplicated with machine name 'multinode-044975-m02' in profile 'multinode-044975'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-044975-m03 --driver=docker  --container-runtime=containerd
E1206 09:30:55.759975    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-044975-m03 --driver=docker  --container-runtime=containerd: (33.401250969s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-044975
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-044975: exit status 80 (667.581313ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-044975 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-044975-m03 already exists in multinode-044975-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-044975-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-044975-m03: (2.14201312s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.35s)
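
The conflict checks above hinge on two exit codes: reusing a machine name from an existing profile fails "minikube start" with exit 14 (MK_USAGE), and "minikube node add" refuses a node whose name collides with another profile, exiting 80 (GUEST_NODE_ADD). A sketch of the first case, using the names from this run:

	# multinode-044975-m02 already exists as a machine inside profile multinode-044975
	minikube start -p multinode-044975-m02 --driver=docker --container-runtime=containerd
	echo "exit: $?"   # 14 expected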

                                                
                                    
TestPreload (116.45s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-569643 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1206 09:31:36.062043    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-569643 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (59.388954631s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-569643 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-569643 image pull gcr.io/k8s-minikube/busybox: (2.368936666s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-569643
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-569643: (1.378744813s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-569643 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1206 09:32:40.400132    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:57.331254    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-569643 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (50.656203002s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-569643 image list
helpers_test.go:175: Cleaning up "test-preload-569643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-569643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-569643: (2.420346135s)
--- PASS: TestPreload (116.45s)
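
The test drives a preload round-trip: create a cluster without preloaded images, pull an extra image, stop, restart with preloading enabled, and verify the pulled image survived. By hand, using the same flags (placeholder profile name):

	minikube start -p <profile> --preload=false --driver=docker --container-runtime=containerd
	minikube -p <profile> image pull gcr.io/k8s-minikube/busybox
	minikube stop -p <profile>
	minikube start -p <profile> --preload=true --driver=docker --container-runtime=containerd
	minikube -p <profile> image list   # busybox should still be listed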

                                                
                                    
TestScheduledStopUnix (109.22s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-605211 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-605211 --memory=3072 --driver=docker  --container-runtime=containerd: (32.851515406s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-605211 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 09:33:36.321497  184776 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:33:36.321661  184776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:36.321669  184776 out.go:374] Setting ErrFile to fd 2...
	I1206 09:33:36.321674  184776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:36.321968  184776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:33:36.322231  184776 out.go:368] Setting JSON to false
	I1206 09:33:36.322356  184776 mustload.go:66] Loading cluster: scheduled-stop-605211
	I1206 09:33:36.322754  184776 config.go:182] Loaded profile config "scheduled-stop-605211": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:33:36.322836  184776 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/config.json ...
	I1206 09:33:36.323037  184776 mustload.go:66] Loading cluster: scheduled-stop-605211
	I1206 09:33:36.323164  184776 config.go:182] Loaded profile config "scheduled-stop-605211": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-605211 -n scheduled-stop-605211
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-605211 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 09:33:36.746297  184867 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:33:36.746409  184867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:36.746419  184867 out.go:374] Setting ErrFile to fd 2...
	I1206 09:33:36.746425  184867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:36.746789  184867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:33:36.747103  184867 out.go:368] Setting JSON to false
	I1206 09:33:36.747520  184867 daemonize_unix.go:73] killing process 184794 as it is an old scheduled stop
	I1206 09:33:36.751837  184867 mustload.go:66] Loading cluster: scheduled-stop-605211
	I1206 09:33:36.752406  184867 config.go:182] Loaded profile config "scheduled-stop-605211": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:33:36.752565  184867 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/config.json ...
	I1206 09:33:36.752803  184867 mustload.go:66] Loading cluster: scheduled-stop-605211
	I1206 09:33:36.753023  184867 config.go:182] Loaded profile config "scheduled-stop-605211": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1206 09:33:36.756650    4292 retry.go:31] will retry after 57.2µs: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.757784    4292 retry.go:31] will retry after 148.667µs: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.758950    4292 retry.go:31] will retry after 248.484µs: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.760826    4292 retry.go:31] will retry after 371.075µs: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.761987    4292 retry.go:31] will retry after 377.128µs: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.763101    4292 retry.go:31] will retry after 1.018733ms: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.764227    4292 retry.go:31] will retry after 952.951µs: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.765385    4292 retry.go:31] will retry after 872.895µs: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.766472    4292 retry.go:31] will retry after 3.216016ms: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.770689    4292 retry.go:31] will retry after 4.257226ms: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.775926    4292 retry.go:31] will retry after 3.073434ms: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.779138    4292 retry.go:31] will retry after 10.165563ms: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.790360    4292 retry.go:31] will retry after 17.316916ms: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.810160    4292 retry.go:31] will retry after 20.879961ms: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.831432    4292 retry.go:31] will retry after 17.426861ms: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
I1206 09:33:36.849698    4292 retry.go:31] will retry after 33.090403ms: open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-605211 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-605211 -n scheduled-stop-605211
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-605211
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-605211 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 09:34:02.734452  185544 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:34:02.734690  185544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:34:02.734722  185544 out.go:374] Setting ErrFile to fd 2...
	I1206 09:34:02.734742  185544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:34:02.735014  185544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:34:02.735300  185544 out.go:368] Setting JSON to false
	I1206 09:34:02.735475  185544 mustload.go:66] Loading cluster: scheduled-stop-605211
	I1206 09:34:02.735903  185544 config.go:182] Loaded profile config "scheduled-stop-605211": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:34:02.736007  185544 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/scheduled-stop-605211/config.json ...
	I1206 09:34:02.736221  185544 mustload.go:66] Loading cluster: scheduled-stop-605211
	I1206 09:34:02.736374  185544 config.go:182] Loaded profile config "scheduled-stop-605211": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-605211
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-605211: exit status 7 (75.672417ms)

                                                
                                                
-- stdout --
	scheduled-stop-605211
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-605211 -n scheduled-stop-605211
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-605211 -n scheduled-stop-605211: exit status 7 (76.624631ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-605211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-605211
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-605211: (4.742724772s)
--- PASS: TestScheduledStopUnix (109.22s)
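
The scheduled-stop workflow exercised above reduces to three commands: arm a timer, inspect it, cancel it (flags as used in the test; placeholder profile name):

	minikube stop -p <profile> --schedule 5m        # arm a stop 5 minutes out
	minikube status -p <profile> --format={{.TimeToStop}}
	minikube stop -p <profile> --cancel-scheduled   # cancel all pending stops

Re-running --schedule replaces the previous timer; the log above shows the old scheduled-stop process being killed first.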

                                                
                                    
TestInsufficientStorage (12.46s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-342182 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-342182 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.84061088s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4649fc4-755f-4b23-b6b4-f76977b104d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-342182] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"320dd852-29c6-4b7c-8939-682a0cea5a18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22049"}}
	{"specversion":"1.0","id":"8badfc97-e673-4301-bf68-451f7aead08f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7508060a-8102-4b4e-a9ec-ef28ecbe0928","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig"}}
	{"specversion":"1.0","id":"0181ea8c-12b8-4437-8891-101e13a94639","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube"}}
	{"specversion":"1.0","id":"b00b00ed-4626-4042-9f8e-dab333a53474","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b54a0240-c46b-4cee-b7b4-8c9c29608348","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2167978d-9390-4b97-8d79-ac50c39a79e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6eb1fcbd-32ce-4692-bc53-6fb46a66888b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d92f08d9-b7aa-4546-bfcd-9b2c48316b28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a12e4eb-c252-424d-a26e-5549f1490c11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0e3baf07-7ba8-4395-a3c4-49ea8a076227","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-342182\" primary control-plane node in \"insufficient-storage-342182\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f035068-03f8-41b7-8813-582731acd24f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b76a7fe7-ea98-4b18-81fe-e5275548828d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8800903e-d6ba-4d9c-8d2e-fbe11f6aabc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-342182 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-342182 --output=json --layout=cluster: exit status 7 (307.786652ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-342182","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-342182","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 09:35:02.769514  187398 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-342182" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-342182 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-342182 --output=json --layout=cluster: exit status 7 (314.3632ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-342182","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-342182","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 09:35:03.084867  187463 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-342182" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
	E1206 09:35:03.095466  187463 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/insufficient-storage-342182/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-342182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-342182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-342182: (1.992145411s)
--- PASS: TestInsufficientStorage (12.46s)
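
With --output=json, "minikube start" emits one CloudEvents-style JSON object per line, so the failure above can be extracted mechanically. A sketch, assuming jq is installed; the MINIKUBE_TEST_* variables shown in the event log are how the harness simulates a nearly full disk:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p <profile> --output=json --driver=docker --container-runtime=containerd \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'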

                                                
                                    
TestRunningBinaryUpgrade (310.5s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2443162535 start -p running-upgrade-698797 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2443162535 start -p running-upgrade-698797 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.709483306s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-698797 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1206 09:40:55.755183    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:41:36.061996    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:57.331646    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:59.143428    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-698797 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.375678525s)
helpers_test.go:175: Cleaning up "running-upgrade-698797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-698797
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-698797: (2.03619212s)
--- PASS: TestRunningBinaryUpgrade (310.50s)
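
The upgrade path under test is: start a cluster with an older release binary, then run "start" on the same profile with the binary under test while the cluster is still running. Schematically (the old-binary path is a placeholder for whatever release was downloaded):

	/tmp/minikube-v1.35.0.<suffix> start -p <profile> --memory=3072 --vm-driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 start -p <profile> --memory=3072 --driver=docker --container-runtime=containerd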

                                                
                                    
x
+
TestMissingContainerUpgrade (128.68s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.984866667 start -p missing-upgrade-184521 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.984866667 start -p missing-upgrade-184521 --memory=3072 --driver=docker  --container-runtime=containerd: (1m2.590443932s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-184521
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-184521
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-184521 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-184521 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.716972277s)
helpers_test.go:175: Cleaning up "missing-upgrade-184521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-184521
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-184521: (2.750361067s)
--- PASS: TestMissingContainerUpgrade (128.68s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239579 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-239579 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (93.494306ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-239579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
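
As the error text says, --no-kubernetes and --kubernetes-version are mutually exclusive. Either drop the version flag or clear a version pinned in the global config (sketch; placeholder profile name):

	minikube config unset kubernetes-version
	minikube start -p <profile> --no-kubernetes --driver=docker --container-runtime=containerd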

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (47.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239579 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-239579 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (46.326973419s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-239579 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.03s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239579 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1206 09:35:55.755007    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-239579 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.08178719s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-239579 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-239579 status -o json: exit status 2 (310.232165ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-239579","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-239579
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-239579: (2.675742451s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.07s)

                                                
                                    
TestNoKubernetes/serial/Start (7.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239579 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-239579 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.464538329s)
--- PASS: TestNoKubernetes/serial/Start (7.46s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-239579 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-239579 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.831224ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
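
The check relies on systemctl's exit code: "systemctl is-active --quiet" exits 0 only when the unit is active, so the ssh exit status of 3 seen here means kubelet is not running, which is exactly what a --no-kubernetes profile should look like. By hand (sketch):

	minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet" \
	  && echo "kubelet active" || echo "kubelet not active (exit $?)"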

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.72s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-239579
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-239579: (1.289673924s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239579 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-239579 --driver=docker  --container-runtime=containerd: (6.775411205s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-239579 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-239579 "sudo systemctl is-active --quiet service kubelet": exit status 1 (360.051805ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.14s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (55.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3853039156 start -p stopped-upgrade-904377 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3853039156 start -p stopped-upgrade-904377 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (35.870910363s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3853039156 -p stopped-upgrade-904377 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3853039156 -p stopped-upgrade-904377 stop: (1.255728429s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-904377 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1206 09:37:57.331320    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-904377 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.933861471s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (55.06s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-904377
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-904377: (2.007022703s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.01s)

                                                
                                    
TestPause/serial/Start (50.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-433141 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1206 09:43:58.820031    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-433141 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (50.950799987s)
--- PASS: TestPause/serial/Start (50.95s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-433141 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-433141 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.198865672s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.22s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-433141 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-433141 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-433141 --output=json --layout=cluster: exit status 2 (370.901045ms)

                                                
                                                
-- stdout --
	{"Name":"pause-433141","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-433141","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
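
The --layout=cluster JSON uses HTTP-flavored status codes: 200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage, as seen here and in the storage test above. The command itself exits 2 for a paused cluster (and 7 earlier for insufficient storage), so scripts should read the JSON rather than the exit code. A sketch, assuming jq:

	minikube status -p <profile> --output=json --layout=cluster | jq -r '.StatusName'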

                                                
                                    
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-433141 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
TestPause/serial/PauseAgain (0.94s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-433141 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
TestPause/serial/DeletePaused (2.94s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-433141 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-433141 --alsologtostderr -v=5: (2.938005627s)
--- PASS: TestPause/serial/DeletePaused (2.94s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-433141
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-433141: exit status 1 (18.462012ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-433141: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)
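
Verifying a deletion means checking every resource class minikube created. The test's checks, runnable by hand (placeholder profile name):

	minikube profile list --output json   # profile gone from the list
	docker ps -a                          # no leftover container
	docker volume inspect <profile>       # expect "no such volume", exit 1
	docker network ls                     # no leftover network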

                                                
                                    
TestNetworkPlugins/group/false (3.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-793086 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-793086 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (200.784126ms)

                                                
                                                
-- stdout --
	* [false-793086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:45:09.157070  239096 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:45:09.157305  239096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:45:09.157333  239096 out.go:374] Setting ErrFile to fd 2...
	I1206 09:45:09.157357  239096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:45:09.157669  239096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
	I1206 09:45:09.158145  239096 out.go:368] Setting JSON to false
	I1206 09:45:09.159370  239096 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5261,"bootTime":1765009049,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 09:45:09.159512  239096 start.go:143] virtualization:  
	I1206 09:45:09.163251  239096 out.go:179] * [false-793086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 09:45:09.167317  239096 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:45:09.167397  239096 notify.go:221] Checking for updates...
	I1206 09:45:09.173429  239096 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:45:09.176423  239096 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
	I1206 09:45:09.179365  239096 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
	I1206 09:45:09.182257  239096 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 09:45:09.185129  239096 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:45:09.188495  239096 config.go:182] Loaded profile config "kubernetes-upgrade-228904": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:45:09.188614  239096 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:45:09.212754  239096 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 09:45:09.212878  239096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:45:09.281196  239096 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 09:45:09.272121439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 09:45:09.281308  239096 docker.go:319] overlay module found
	I1206 09:45:09.286184  239096 out.go:179] * Using the docker driver based on user configuration
	I1206 09:45:09.289189  239096 start.go:309] selected driver: docker
	I1206 09:45:09.289211  239096 start.go:927] validating driver "docker" against <nil>
	I1206 09:45:09.289225  239096 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:45:09.292900  239096 out.go:203] 
	W1206 09:45:09.295905  239096 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1206 09:45:09.298765  239096 out.go:203] 
** /stderr **
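
For context on the MK_USAGE exit above: minikube refuses to start a containerd cluster when CNI is disabled, which is exactly what this "false" network-plugin case is meant to provoke. A minimal reproduction sketch, assuming the harness passes --cni=false as the profile name suggests (the start command itself is not shown in this excerpt):

# hypothetical reproduction; profile name and flags taken from this run
out/minikube-linux-arm64 start -p false-793086 --cni=false \
  --driver=docker --container-runtime=containerd
# expected: non-zero exit with MK_USAGE: The "containerd" container runtime requires CNI

Because the profile is rejected before anything is created, every probe in the debugLogs dump below fails with a context/profile-not-found error, and the case is still recorded as a pass.
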
net_test.go:88: 
----------------------- debugLogs start: false-793086 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-793086

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-793086

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-793086

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-793086

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-793086

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-793086

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-793086

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-793086

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-793086

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-793086

>>> host: /etc/nsswitch.conf:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /etc/hosts:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /etc/resolv.conf:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-793086

>>> host: crictl pods:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: crictl containers:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> k8s: describe netcat deployment:
error: context "false-793086" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-793086" does not exist

>>> k8s: netcat logs:
error: context "false-793086" does not exist

>>> k8s: describe coredns deployment:
error: context "false-793086" does not exist

>>> k8s: describe coredns pods:
error: context "false-793086" does not exist

>>> k8s: coredns logs:
error: context "false-793086" does not exist

>>> k8s: describe api server pod(s):
error: context "false-793086" does not exist

>>> k8s: api server logs:
error: context "false-793086" does not exist

>>> host: /etc/cni:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: ip a s:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: ip r s:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: iptables-save:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: iptables table nat:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> k8s: describe kube-proxy daemon set:
error: context "false-793086" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-793086" does not exist

>>> k8s: kube-proxy logs:
error: context "false-793086" does not exist

>>> host: kubelet daemon status:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: kubelet daemon config:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> k8s: kubelet logs:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:37:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-228904
contexts:
- context:
    cluster: kubernetes-upgrade-228904
    user: kubernetes-upgrade-228904
  name: kubernetes-upgrade-228904
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-228904
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/client.crt
    client-key: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-793086

>>> host: docker daemon status:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: docker daemon config:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /etc/docker/daemon.json:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: docker system info:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: cri-docker daemon status:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: cri-docker daemon config:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: cri-dockerd version:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: containerd daemon status:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: containerd daemon config:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /etc/containerd/config.toml:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: containerd config dump:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: crio daemon status:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: crio daemon config:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: /etc/crio:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

>>> host: crio config:
* Profile "false-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793086"

----------------------- debugLogs end: false-793086 [took: 3.371902752s] --------------------------------
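
Note that the kubectl config dump above contains only the concurrently running kubernetes-upgrade-228904 cluster and an empty current-context, consistent with the false-793086 profile never having been created. A quick way to confirm which contexts exist in this kubeconfig (path from this run):

KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig \
  kubectl config get-contexts
# only kubernetes-upgrade-228904 should appear; false-793086 is absent
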
helpers_test.go:175: Cleaning up "false-793086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-793086
--- PASS: TestNetworkPlugins/group/false (3.75s)

TestStartStop/group/old-k8s-version/serial/FirstStart (64.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-587884 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-587884 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m4.468324006s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (64.47s)

TestStartStop/group/embed-certs/serial/FirstStart (86.72s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-100767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1206 09:50:55.754523    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-100767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m26.722620129s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.72s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-587884 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [590e7d78-856c-452f-b127-71932ebb05d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [590e7d78-856c-452f-b127-71932ebb05d9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00369017s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-587884 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.50s)
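
The busybox workload used by the DeployApp steps comes from testdata/busybox.yaml in the minikube repo. The exact file may differ, but a minimal equivalent consistent with this log (label integration-test=busybox, image from the gcr.io/k8s-minikube/busybox repository seen in the image lists below) would be:

# sketch only; the real testdata/busybox.yaml may differ in detail
kubectl --context old-k8s-version-587884 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF

The follow-up exec (ulimit -n) simply verifies that a shell can run inside the pod and report its file-descriptor limit.
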

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-587884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-587884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.085148578s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-587884 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/old-k8s-version/serial/Stop (12.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-587884 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-587884 --alsologtostderr -v=3: (12.198507865s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.20s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-587884 -n old-k8s-version-587884
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-587884 -n old-k8s-version-587884: exit status 7 (90.849303ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-587884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
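
The sequence above is the EnableAddonAfterStop contract: minikube status exits non-zero once the host is stopped (exit status 7 with Host=Stopped in this run), yet addons enable dashboard must still succeed against the stopped profile. As a plain shell sketch (profile name from this run):

out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-587884 \
  || echo "status exited $? (7 == stopped host here, which the test tolerates)"
out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-587884 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4
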

TestStartStop/group/old-k8s-version/serial/SecondStart (27.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-587884 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1206 09:51:36.062073    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-587884 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (26.596083534s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-587884 -n old-k8s-version-587884
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (27.01s)

TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-100767 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8e96bd12-365f-47e1-82fc-fca2d651b8bc] Pending
helpers_test.go:352: "busybox" [8e96bd12-365f-47e1-82fc-fca2d651b8bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8e96bd12-365f-47e1-82fc-fca2d651b8bc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003557882s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-100767 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-100767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-100767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050993894s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-100767 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (12.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-100767 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-100767 --alsologtostderr -v=3: (12.535658199s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.54s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-sf7qz" [a8b32c9f-1491-480f-b54f-2071c2086029] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-sf7qz" [a8b32c9f-1491-480f-b54f-2071c2086029] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003533431s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.00s)
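
The helper here polls for a Running kubernetes-dashboard pod for up to 9 minutes. An approximately equivalent one-liner, assuming standard kubectl wait semantics (context and label from this run; the helper checks the Running phase rather than the Ready condition):

kubectl --context old-k8s-version-587884 -n kubernetes-dashboard \
  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
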

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-100767 -n embed-certs-100767
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-100767 -n embed-certs-100767: exit status 7 (79.239616ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-100767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (55.67s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-100767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-100767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (55.297421038s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-100767 -n embed-certs-100767
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.67s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-sf7qz" [a8b32c9f-1491-480f-b54f-2071c2086029] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004193784s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-587884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-587884 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)
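
VerifyKubernetesImages diffs the JSON image list against the expected set for the Kubernetes version and reports leftovers such as the kindnet and busybox images above. To inspect the same list by hand, something like the following should work, assuming minikube's JSON output is an array of objects carrying a repoTags field:

out/minikube-linux-arm64 -p old-k8s-version-587884 image list --format=json \
  | jq -r '.[].repoTags[]?' | sort
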

TestStartStop/group/old-k8s-version/serial/Pause (4.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-587884 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-587884 --alsologtostderr -v=1: (1.488647777s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-587884 -n old-k8s-version-587884
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-587884 -n old-k8s-version-587884: exit status 2 (528.104013ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-587884 -n old-k8s-version-587884
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-587884 -n old-k8s-version-587884: exit status 2 (529.143784ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-587884 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-587884 --alsologtostderr -v=1: (1.122972545s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-587884 -n old-k8s-version-587884
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-587884 -n old-k8s-version-587884
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.97s)
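
The status checks in the Pause test are expected to fail while the cluster is paused: both the apiserver and the kubelet report non-running states with exit status 2, and only after unpause do the same commands exit cleanly. The round trip, as plain commands (profile name from this run):

out/minikube-linux-arm64 pause -p old-k8s-version-587884
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-587884   # "Paused", exit 2
out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-587884     # "Stopped", exit 2
out/minikube-linux-arm64 unpause -p old-k8s-version-587884
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-587884   # exits 0 once resumed
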

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mgh5l" [57de55c0-2d4c-43b9-819f-172a20bd6310] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00382539s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mgh5l" [57de55c0-2d4c-43b9-819f-172a20bd6310] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003336769s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-100767 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-100767 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-100767 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-100767 -n embed-certs-100767
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-100767 -n embed-certs-100767: exit status 2 (337.076843ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-100767 -n embed-certs-100767
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-100767 -n embed-certs-100767: exit status 2 (333.707637ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-100767 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-100767 -n embed-certs-100767
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-100767 -n embed-certs-100767
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m20.928994875s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.93s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-837391 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3890c8c7-b299-4ef5-a107-7ff4a300e282] Pending
helpers_test.go:352: "busybox" [3890c8c7-b299-4ef5-a107-7ff4a300e282] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3890c8c7-b299-4ef5-a107-7ff4a300e282] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00357863s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-837391 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-837391 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.088232878s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-837391 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-837391 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-837391 --alsologtostderr -v=3: (12.138601718s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-837391 -n default-k8s-diff-port-837391
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-837391 -n default-k8s-diff-port-837391: exit status 7 (66.940848ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-837391 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-837391 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (49.903725142s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-837391 -n default-k8s-diff-port-837391
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.30s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8wvwz" [1e4f0a33-8981-45e2-b67c-1d6ea79bf2c5] Running
E1206 09:55:55.754587    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003322338s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8wvwz" [1e4f0a33-8981-45e2-b67c-1d6ea79bf2c5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003727955s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-837391 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-837391 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-837391 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-837391 -n default-k8s-diff-port-837391
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-837391 -n default-k8s-diff-port-837391: exit status 2 (379.566183ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-837391 -n default-k8s-diff-port-837391
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-837391 -n default-k8s-diff-port-837391: exit status 2 (340.046845ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-837391 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-837391 -n default-k8s-diff-port-837391
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-837391 -n default-k8s-diff-port-837391
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.20s)
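Note: the two "exit status 2 (may be ok)" results above are expected behavior, not flakes: while the cluster is paused, minikube status reports APIServer=Paused and Kubelet=Stopped and signals that state through a non-zero exit code (later in this run, exit status 7 marks a fully stopped host). A minimal sketch of the same pause/unpause round trip, assuming this run's profile name; the --format values are Go templates selecting single fields of the status output:

    minikube pause -p default-k8s-diff-port-837391
    minikube status -p default-k8s-diff-port-837391 --format='{{.APIServer}}'   # "Paused", non-zero exit
    minikube status -p default-k8s-diff-port-837391 --format='{{.Kubelet}}'     # "Stopped", non-zero exit
    minikube unpause -p default-k8s-diff-port-837391
    minikube status -p default-k8s-diff-port-837391 --format='{{.APIServer}}'   # "Running", exit 0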

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1.32s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-257359 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-257359 --alsologtostderr -v=3: (1.319046171s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-257359 -n no-preload-257359: exit status 7 (67.733204ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-257359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
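Note: enabling an addon against a stopped profile works because, as exercised here, the enable step only has to record the addon (plus the MetricsScraper image override) in the profile's configuration; the components are deployed when the cluster next starts. The same flow by hand, assuming this run's profile name:

    minikube stop -p no-preload-257359
    minikube addons enable dashboard -p no-preload-257359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    minikube start -p no-preload-257359    # the dashboard addon is applied on start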

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.36s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-387337 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-387337 --alsologtostderr -v=3: (1.358152578s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-387337 -n newest-cni-387337: exit status 7 (71.514529ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-387337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
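Note: the two warnings above are why these newest-cni checks pass trivially: the profile is started in CNI mode without a network plugin deployed, so nodes stay NotReady and ordinary pods cannot schedule. The "additional setup" the message alludes to is deploying some CNI and waiting for node readiness; a hedged sketch (the flannel manifest URL is illustrative only, not what this suite uses):

    kubectl --context newest-cni-387337 apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    kubectl --context newest-cni-387337 wait --for=condition=Ready node --all --timeout=5m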

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-387337 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (79.37s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m19.371089119s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.37s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-793086 "pgrep -a kubelet"
I1206 10:14:10.771751    4292 config.go:182] Loaded profile config "auto-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-793086 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t9vjl" [9563017d-aafe-4ca6-8500-08c6368a0ec3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t9vjl" [9563017d-aafe-4ca6-8500-08c6368a0ec3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003365814s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
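Note: the deploy step uses kubectl replace --force rather than apply: --force deletes any existing object and recreates it from the manifest, which keeps repeated runs idempotent even across changes to immutable fields. Equivalent by hand, assuming the same context and the netcat manifest from testdata; the rollout wait is an added illustration, not part of the test:

    kubectl --context auto-793086 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-793086 rollout status deployment/netcat --timeout=15m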

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-793086 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
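Note: Localhost and HairPin probe two different paths from inside the same netcat pod. Localhost checks that the container's own listener on 8080 is up; HairPin connects to the service name "netcat", so the packet leaves the pod and is routed back to it through the service VIP (hairpin NAT). Reproducing both by hand, assuming the service and deployment from testdata/netcat-deployment.yaml:

    kubectl --context auto-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # local listener
    kubectl --context auto-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via the service
    # nc -z makes a connect-only scan; -w 5 bounds the wait at 5s; exit 0 means the port accepted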

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (79.33s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1206 10:14:42.815190    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/default-k8s-diff-port-837391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m19.329330831s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-kzdms" [7c2966e2-6348-4f34-b485-62033af66571] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003503562s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
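Note: ControllerPod is a labelled-pod readiness wait: the kindnet CNI runs as a DaemonSet in kube-system, and the test polls until a pod carrying the app=kindnet label above is Running and healthy. The same wait expressed directly with kubectl, assuming this run's context:

    kubectl --context kindnet-793086 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m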

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-793086 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-793086 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rr7pn" [28974c0e-f051-42ad-8d08-4036ccddc909] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rr7pn" [28974c0e-f051-42ad-8d08-4036ccddc909] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003688303s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-793086 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (60.54s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m0.54466275s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.54s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-9vxd4" [8715a94c-477d-4706-861f-5d96c14d2985] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004150858s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-793086 "pgrep -a kubelet"
I1206 10:17:45.951189    4292 config.go:182] Loaded profile config "calico-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-793086 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8dzjm" [3ae32e25-1d60-4147-aa78-71b4e6528535] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8dzjm" [3ae32e25-1d60-4147-aa78-71b4e6528535] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.008205508s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-793086 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (58.45s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.444830547s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.45s)
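Note: unlike the named plugins above (--cni=kindnet, --cni=calico), this variant passes a manifest path to --cni, so minikube applies an arbitrary CNI definition at start; any valid local YAML should work the same way. The exact invocation from this run:

    out/minikube-linux-arm64 start -p custom-flannel-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd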

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-793086 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-793086 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nkxjj" [8971bb32-6088-4b51-a778-96e87cb1143a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nkxjj" [8971bb32-6088-4b51-a778-96e87cb1143a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003634615s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-793086 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (42.74s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (42.738019282s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.74s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-793086 "pgrep -a kubelet"
I1206 10:20:33.806403    4292 config.go:182] Loaded profile config "enable-default-cni-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-793086 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b6h46" [d555fef3-e376-48b6-aee4-ed582d0e9624] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b6h46" [d555fef3-e376-48b6-aee4-ed582d0e9624] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00382453s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-793086 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.9s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m1.898626247s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.90s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-8p84h" [5837b644-735c-4c21-b05a-3be0a7547b57] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003841569s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-793086 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-793086 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nw4zh" [bbe44546-2f87-4850-9f6f-d40b230513f5] Pending
helpers_test.go:352: "netcat-cd4db9dbf-nw4zh" [bbe44546-2f87-4850-9f6f-d40b230513f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nw4zh" [bbe44546-2f87-4850-9f6f-d40b230513f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004058096s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-793086 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (72.5s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1206 10:22:39.633017    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:39.639509    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:39.651311    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:39.673613    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:39.715695    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:39.797027    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:39.959105    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:40.280365    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:40.406631    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:40.922280    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:42.204598    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:44.769032    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/calico-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-793086 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m12.494962878s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.50s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-793086 "pgrep -a kubelet"
I1206 10:23:42.129119    4292 config.go:182] Loaded profile config "bridge-793086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-793086 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v4cvh" [2bf8bf5e-8555-4531-b2e2-361258f7d80b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v4cvh" [2bf8bf5e-8555-4531-b2e2-361258f7d80b] Running
E1206 10:23:45.772333    4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kindnet-793086/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.002882392s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-793086 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-793086 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.45
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.19
392 TestNetworkPlugins/group/kubenet 3.64
400 TestNetworkPlugins/group/cilium 3.93
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.45s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-525213 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-525213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-525213
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-507319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-507319
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-793086 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-793086

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-793086

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-793086

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-793086

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-793086

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-793086

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-793086

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-793086

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-793086

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-793086

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /etc/hosts:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /etc/resolv.conf:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-793086

>>> host: crictl pods:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: crictl containers:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> k8s: describe netcat deployment:
error: context "kubenet-793086" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-793086" does not exist

>>> k8s: netcat logs:
error: context "kubenet-793086" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-793086" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-793086" does not exist

>>> k8s: coredns logs:
error: context "kubenet-793086" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-793086" does not exist

>>> k8s: api server logs:
error: context "kubenet-793086" does not exist

>>> host: /etc/cni:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: ip a s:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: ip r s:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: iptables-save:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: iptables table nat:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-793086" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-793086" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-793086" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: kubelet daemon config:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> k8s: kubelet logs:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:37:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-228904
contexts:
- context:
    cluster: kubernetes-upgrade-228904
    user: kubernetes-upgrade-228904
  name: kubernetes-upgrade-228904
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-228904
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/client.crt
    client-key: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-793086

>>> host: docker daemon status:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: docker daemon config:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: docker system info:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: cri-docker daemon status:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: cri-docker daemon config:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: cri-dockerd version:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: containerd daemon status:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: containerd daemon config:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: containerd config dump:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: crio daemon status:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: crio daemon config:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: /etc/crio:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"

>>> host: crio config:
* Profile "kubenet-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793086"
----------------------- debugLogs end: kubenet-793086 [took: 3.470316016s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-793086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-793086
--- SKIP: TestNetworkPlugins/group/kubenet (3.64s)

x
+
TestNetworkPlugins/group/cilium (3.93s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-793086 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-793086

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-793086

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-793086

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-793086

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-793086

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-793086

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-793086

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-793086

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-793086

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-793086

>>> host: /etc/nsswitch.conf:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /etc/hosts:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /etc/resolv.conf:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-793086

>>> host: crictl pods:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: crictl containers:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> k8s: describe netcat deployment:
error: context "cilium-793086" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-793086" does not exist

>>> k8s: netcat logs:
error: context "cilium-793086" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-793086" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-793086" does not exist

>>> k8s: coredns logs:
error: context "cilium-793086" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-793086" does not exist

>>> k8s: api server logs:
error: context "cilium-793086" does not exist

>>> host: /etc/cni:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: ip a s:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: ip r s:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: iptables-save:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: iptables table nat:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-793086

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-793086

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-793086" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-793086" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-793086

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-793086

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-793086" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-793086" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-793086" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-793086" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-793086" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: kubelet daemon config:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> k8s: kubelet logs:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:37:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-228904
contexts:
- context:
    cluster: kubernetes-upgrade-228904
    user: kubernetes-upgrade-228904
  name: kubernetes-upgrade-228904
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-228904
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/client.crt
    client-key: /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/kubernetes-upgrade-228904/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-793086

>>> host: docker daemon status:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: docker daemon config:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: docker system info:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: cri-docker daemon status:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: cri-docker daemon config:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: cri-dockerd version:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: containerd daemon status:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: containerd daemon config:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: containerd config dump:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: crio daemon status:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: crio daemon config:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: /etc/crio:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"

>>> host: crio config:
* Profile "cilium-793086" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793086"
----------------------- debugLogs end: cilium-793086 [took: 3.775159774s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-793086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-793086
--- SKIP: TestNetworkPlugins/group/cilium (3.93s)